00:00:00.000 Started by upstream project "autotest-per-patch" build number 132087
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.021 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.022 The recommended git tool is: git
00:00:00.022 using credential 00000000-0000-0000-0000-000000000002
00:00:00.024 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.056 Fetching changes from the remote Git repository
00:00:00.058 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.100 Using shallow fetch with depth 1
00:00:00.100 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.100 > git --version # timeout=10
00:00:00.140 > git --version # 'git version 2.39.2'
00:00:00.140 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.169 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.169 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.290 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.301 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.312 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD)
00:00:03.312 > git config core.sparsecheckout # timeout=10
00:00:03.324 > git read-tree -mu HEAD # timeout=10
00:00:03.340 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5
00:00:03.359 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd"
00:00:03.360 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10
00:00:03.445 [Pipeline] Start of Pipeline
00:00:03.458 [Pipeline] library
00:00:03.460 Loading library shm_lib@master
00:00:03.460 Library shm_lib@master is cached. Copying from home.
00:00:03.476 [Pipeline] node
00:00:18.478 Still waiting to schedule task
00:00:18.479 Waiting for next available executor on ‘vagrant-vm-host’
00:02:20.089 Running on VM-host-SM16 in /var/jenkins/workspace/nvme-vg-autotest
00:02:20.092 [Pipeline] {
00:02:20.102 [Pipeline] catchError
00:02:20.104 [Pipeline] {
00:02:20.120 [Pipeline] wrap
00:02:20.129 [Pipeline] {
00:02:20.138 [Pipeline] stage
00:02:20.140 [Pipeline] { (Prologue)
00:02:20.163 [Pipeline] echo
00:02:20.165 Node: VM-host-SM16
00:02:20.173 [Pipeline] cleanWs
00:02:20.183 [WS-CLEANUP] Deleting project workspace...
00:02:20.183 [WS-CLEANUP] Deferred wipeout is used...
00:02:20.189 [WS-CLEANUP] done
00:02:20.428 [Pipeline] setCustomBuildProperty
00:02:20.516 [Pipeline] httpRequest
00:02:20.943 [Pipeline] echo
00:02:20.945 Sorcerer 10.211.164.101 is alive
00:02:20.956 [Pipeline] retry
00:02:20.958 [Pipeline] {
00:02:20.971 [Pipeline] httpRequest
00:02:20.975 HttpMethod: GET
00:02:20.976 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:02:20.976 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:02:20.982 Response Code: HTTP/1.1 200 OK
00:02:20.982 Success: Status code 200 is in the accepted range: 200,404
00:02:20.983 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:02:34.982 [Pipeline] }
00:02:35.000 [Pipeline] // retry
00:02:35.008 [Pipeline] sh
00:02:35.291 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:02:35.308 [Pipeline] httpRequest
00:02:35.744 [Pipeline] echo
00:02:35.746 Sorcerer 10.211.164.101 is alive
00:02:35.755 [Pipeline] retry
00:02:35.757 [Pipeline] {
00:02:35.771 [Pipeline] httpRequest
00:02:35.775 HttpMethod: GET
00:02:35.776 URL: http://10.211.164.101/packages/spdk_ca5713c3836dd279778f1b6ac88aa8a5ae3a7968.tar.gz
00:02:35.776 Sending request to url: http://10.211.164.101/packages/spdk_ca5713c3836dd279778f1b6ac88aa8a5ae3a7968.tar.gz
00:02:35.782 Response Code: HTTP/1.1 200 OK
00:02:35.782 Success: Status code 200 is in the accepted range: 200,404
00:02:35.783 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_ca5713c3836dd279778f1b6ac88aa8a5ae3a7968.tar.gz
00:03:43.006 [Pipeline] }
00:03:43.024 [Pipeline] // retry
00:03:43.032 [Pipeline] sh
00:03:43.312 + tar --no-same-owner -xf spdk_ca5713c3836dd279778f1b6ac88aa8a5ae3a7968.tar.gz
00:03:46.652 [Pipeline] sh
00:03:46.935 + git -C spdk log --oneline -n5
00:03:46.935 ca5713c38 bdev/malloc: Support accel sequence when DIF is enabled
00:03:46.935 18e36da1a bdev/malloc: malloc_done() uses switch-case for clean up
00:03:46.935 481542548 accel: Add spdk_accel_sequence_has_task() to query what sequence does
00:03:46.935 a4d8602f2 nvmf: Add no_metadata option to nvmf_subsystem_add_ns
00:03:46.935 15b283ee8 nvmf: Get metadata config by not bdev but bdev_desc
00:03:46.956 [Pipeline] writeFile
00:03:46.970 [Pipeline] sh
00:03:47.250 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:03:47.259 [Pipeline] sh
00:03:47.535 + cat autorun-spdk.conf
00:03:47.535 SPDK_RUN_FUNCTIONAL_TEST=1
00:03:47.535 SPDK_TEST_NVME=1
00:03:47.535 SPDK_TEST_FTL=1
00:03:47.535 SPDK_TEST_ISAL=1
00:03:47.535 SPDK_RUN_ASAN=1
00:03:47.535 SPDK_RUN_UBSAN=1
00:03:47.535 SPDK_TEST_XNVME=1
00:03:47.535 SPDK_TEST_NVME_FDP=1
00:03:47.535 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:47.541 RUN_NIGHTLY=0
00:03:47.543 [Pipeline] }
00:03:47.557 [Pipeline] // stage
00:03:47.572 [Pipeline] stage
00:03:47.574 [Pipeline] { (Run VM)
00:03:47.586 [Pipeline] sh
00:03:47.900 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:03:47.900 + echo 'Start stage prepare_nvme.sh'
00:03:47.900 Start stage prepare_nvme.sh
00:03:47.900 + [[ -n 6 ]]
00:03:47.900 + disk_prefix=ex6
00:03:47.900 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:03:47.900 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:03:47.900 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:03:47.900 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:47.900 ++ SPDK_TEST_NVME=1
00:03:47.900 ++ SPDK_TEST_FTL=1
00:03:47.900 ++ SPDK_TEST_ISAL=1
00:03:47.900 ++ SPDK_RUN_ASAN=1
00:03:47.900 ++ SPDK_RUN_UBSAN=1
00:03:47.900 ++ SPDK_TEST_XNVME=1
00:03:47.900 ++ SPDK_TEST_NVME_FDP=1
00:03:47.900 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:47.900 ++ RUN_NIGHTLY=0
00:03:47.900 + cd /var/jenkins/workspace/nvme-vg-autotest
00:03:47.900 + nvme_files=()
00:03:47.900 + declare -A nvme_files
00:03:47.900 + backend_dir=/var/lib/libvirt/images/backends
00:03:47.900 + nvme_files['nvme.img']=5G
00:03:47.900 + nvme_files['nvme-cmb.img']=5G
00:03:47.900 + nvme_files['nvme-multi0.img']=4G
00:03:47.900 + nvme_files['nvme-multi1.img']=4G
00:03:47.900 + nvme_files['nvme-multi2.img']=4G
00:03:47.900 + nvme_files['nvme-openstack.img']=8G
00:03:47.900 + nvme_files['nvme-zns.img']=5G
00:03:47.900 + (( SPDK_TEST_NVME_PMR == 1 ))
00:03:47.900 + (( SPDK_TEST_FTL == 1 ))
00:03:47.900 + nvme_files["nvme-ftl.img"]=6G
00:03:47.900 + (( SPDK_TEST_NVME_FDP == 1 ))
00:03:47.900 + nvme_files["nvme-fdp.img"]=1G
00:03:47.900 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:03:47.900 + for nvme in "${!nvme_files[@]}"
00:03:47.900 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G
00:03:47.900 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:03:47.900 + for nvme in "${!nvme_files[@]}"
00:03:47.900 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-ftl.img -s 6G
00:03:47.900 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:03:47.900 + for nvme in "${!nvme_files[@]}"
00:03:47.900 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G
00:03:48.836 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:03:48.836 + for nvme in "${!nvme_files[@]}"
00:03:48.836 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G
00:03:48.836 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:03:48.836 + for nvme in "${!nvme_files[@]}"
00:03:48.836 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G
00:03:48.836 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:03:48.836 + for nvme in "${!nvme_files[@]}"
00:03:48.836 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G
00:03:48.836 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:03:48.836 + for nvme in "${!nvme_files[@]}"
00:03:48.836 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G
00:03:48.836 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:03:48.836 + for nvme in "${!nvme_files[@]}"
00:03:48.836 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-fdp.img -s 1G
00:03:48.836 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:03:48.836 + for nvme in "${!nvme_files[@]}"
00:03:48.836 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G
00:03:49.771 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:03:49.771 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu
00:03:49.771 + echo 'End stage prepare_nvme.sh'
00:03:49.771 End stage prepare_nvme.sh
00:03:49.783 [Pipeline] sh
00:03:50.063 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:03:50.063 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex6-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:03:50.063
00:03:50.063 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:03:50.063 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:03:50.063 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:03:50.063 HELP=0
00:03:50.063 DRY_RUN=0
00:03:50.063 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme-ftl.img,/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,/var/lib/libvirt/images/backends/ex6-nvme-fdp.img,
00:03:50.063 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:03:50.063 NVME_AUTO_CREATE=0
00:03:50.063 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,,
00:03:50.063 NVME_CMB=,,,,
00:03:50.063 NVME_PMR=,,,,
00:03:50.063 NVME_ZNS=,,,,
00:03:50.063 NVME_MS=true,,,,
00:03:50.063 NVME_FDP=,,,on,
00:03:50.063 SPDK_VAGRANT_DISTRO=fedora39
00:03:50.063 SPDK_VAGRANT_VMCPU=10
00:03:50.063 SPDK_VAGRANT_VMRAM=12288
00:03:50.063 SPDK_VAGRANT_PROVIDER=libvirt
00:03:50.063 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:03:50.063 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:03:50.063 SPDK_OPENSTACK_NETWORK=0
00:03:50.063 VAGRANT_PACKAGE_BOX=0
00:03:50.063 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:03:50.063 FORCE_DISTRO=true
00:03:50.063 VAGRANT_BOX_VERSION=
00:03:50.063 EXTRA_VAGRANTFILES=
00:03:50.063 NIC_MODEL=e1000
00:03:50.063
00:03:50.063 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:03:50.063 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:03:53.347 Bringing machine 'default' up with 'libvirt' provider...
00:03:53.914 ==> default: Creating image (snapshot of base box volume).
00:03:54.173 ==> default: Creating domain with the following settings...
00:03:54.173 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1730878816_0b9faff7490fa6b87498
00:03:54.173 ==> default: -- Domain type: kvm
00:03:54.173 ==> default: -- Cpus: 10
00:03:54.173 ==> default: -- Feature: acpi
00:03:54.173 ==> default: -- Feature: apic
00:03:54.173 ==> default: -- Feature: pae
00:03:54.173 ==> default: -- Memory: 12288M
00:03:54.173 ==> default: -- Memory Backing: hugepages:
00:03:54.173 ==> default: -- Management MAC:
00:03:54.173 ==> default: -- Loader:
00:03:54.173 ==> default: -- Nvram:
00:03:54.173 ==> default: -- Base box: spdk/fedora39
00:03:54.173 ==> default: -- Storage pool: default
00:03:54.173 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1730878816_0b9faff7490fa6b87498.img (20G)
00:03:54.173 ==> default: -- Volume Cache: default
00:03:54.173 ==> default: -- Kernel:
00:03:54.173 ==> default: -- Initrd:
00:03:54.173 ==> default: -- Graphics Type: vnc
00:03:54.173 ==> default: -- Graphics Port: -1
00:03:54.173 ==> default: -- Graphics IP: 127.0.0.1
00:03:54.173 ==> default: -- Graphics Password: Not defined
00:03:54.173 ==> default: -- Video Type: cirrus
00:03:54.173 ==> default: -- Video VRAM: 9216
00:03:54.173 ==> default: -- Sound Type:
00:03:54.173 ==> default: -- Keymap: en-us
00:03:54.173 ==> default: -- TPM Path:
00:03:54.173 ==> default: -- INPUT: type=mouse, bus=ps2
00:03:54.173 ==> default: -- Command line args:
00:03:54.173 ==> default: -> value=-device,
00:03:54.173 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:03:54.173 ==> default: -> value=-drive,
00:03:54.173 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:03:54.173 ==> default: -> value=-device,
00:03:54.173 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:03:54.173 ==> default: -> value=-device,
00:03:54.173 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:03:54.173 ==> default: -> value=-drive,
00:03:54.173 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-1-drive0,
00:03:54.173 ==> default: -> value=-device,
00:03:54.173 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:54.173 ==> default: -> value=-device,
00:03:54.173 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:03:54.173 ==> default: -> value=-drive,
00:03:54.173 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:03:54.173 ==> default: -> value=-device,
00:03:54.173 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:54.173 ==> default: -> value=-drive,
00:03:54.173 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:03:54.173 ==> default: -> value=-device,
00:03:54.173 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:54.173 ==> default: -> value=-drive,
00:03:54.173 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:03:54.173 ==> default: -> value=-device,
00:03:54.173 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:54.173 ==> default: -> value=-device,
00:03:54.173 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:03:54.173 ==> default: -> value=-device,
00:03:54.173 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:03:54.173 ==> default: -> value=-drive,
00:03:54.174 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:03:54.174 ==> default: -> value=-device,
00:03:54.174 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:03:54.432 ==> default: Creating shared folders metadata...
00:03:54.432 ==> default: Starting domain.
00:03:56.336 ==> default: Waiting for domain to get an IP address...
00:04:14.443 ==> default: Waiting for SSH to become available...
00:04:15.381 ==> default: Configuring and enabling network interfaces...
00:04:20.661 default: SSH address: 192.168.121.2:22
00:04:20.661 default: SSH username: vagrant
00:04:20.661 default: SSH auth method: private key
00:04:23.193 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:04:31.306 ==> default: Mounting SSHFS shared folder...
00:04:32.678 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:04:32.678 ==> default: Checking Mount..
00:04:34.069 ==> default: Folder Successfully Mounted!
00:04:34.069 ==> default: Running provisioner: file...
00:04:34.635 default: ~/.gitconfig => .gitconfig
00:04:35.201
00:04:35.202 SUCCESS!
00:04:35.202
00:04:35.202 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:04:35.202 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:04:35.202 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:04:35.202
00:04:35.210 [Pipeline] }
00:04:35.223 [Pipeline] // stage
00:04:35.231 [Pipeline] dir
00:04:35.232 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:04:35.233 [Pipeline] {
00:04:35.245 [Pipeline] catchError
00:04:35.246 [Pipeline] {
00:04:35.257 [Pipeline] sh
00:04:35.534 + vagrant ssh-config --host vagrant
00:04:35.534 + sed -ne /^Host/,$p
00:04:35.534 + tee ssh_conf
00:04:39.715 Host vagrant
00:04:39.715 HostName 192.168.121.2
00:04:39.715 User vagrant
00:04:39.715 Port 22
00:04:39.715 UserKnownHostsFile /dev/null
00:04:39.715 StrictHostKeyChecking no
00:04:39.715 PasswordAuthentication no
00:04:39.715 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:04:39.715 IdentitiesOnly yes
00:04:39.715 LogLevel FATAL
00:04:39.715 ForwardAgent yes
00:04:39.715 ForwardX11 yes
00:04:39.715
00:04:39.728 [Pipeline] withEnv
00:04:39.730 [Pipeline] {
00:04:39.742 [Pipeline] sh
00:04:40.021 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:04:40.021 source /etc/os-release
00:04:40.021 [[ -e /image.version ]] && img=$(< /image.version)
00:04:40.021 # Minimal, systemd-like check.
00:04:40.021 if [[ -e /.dockerenv ]]; then
00:04:40.021 # Clear garbage from the node's name:
00:04:40.021 # agt-er_autotest_547-896 -> autotest_547-896
00:04:40.021 # $HOSTNAME is the actual container id
00:04:40.021 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:04:40.021 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:04:40.021 # We can assume this is a mount from a host where container is running,
00:04:40.021 # so fetch its hostname to easily identify the target swarm worker.
00:04:40.021 container="$(< /etc/hostname) ($agent)"
00:04:40.021 else
00:04:40.021 # Fallback
00:04:40.021 container=$agent
00:04:40.021 fi
00:04:40.021 fi
00:04:40.021 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:04:40.021
00:04:40.290 [Pipeline] }
00:04:40.305 [Pipeline] // withEnv
00:04:40.314 [Pipeline] setCustomBuildProperty
00:04:40.328 [Pipeline] stage
00:04:40.330 [Pipeline] { (Tests)
00:04:40.346 [Pipeline] sh
00:04:40.625 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:04:40.897 [Pipeline] sh
00:04:41.176 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:04:41.448 [Pipeline] timeout
00:04:41.448 Timeout set to expire in 50 min
00:04:41.450 [Pipeline] {
00:04:41.462 [Pipeline] sh
00:04:41.742 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:04:42.308 HEAD is now at ca5713c38 bdev/malloc: Support accel sequence when DIF is enabled
00:04:42.320 [Pipeline] sh
00:04:42.600 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:04:42.869 [Pipeline] sh
00:04:43.148 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:04:43.420 [Pipeline] sh
00:04:43.697 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo
00:04:43.955 ++ readlink -f spdk_repo
00:04:43.955 + DIR_ROOT=/home/vagrant/spdk_repo
00:04:43.955 + [[ -n /home/vagrant/spdk_repo ]]
00:04:43.955 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:04:43.955 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:04:43.955 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:04:43.955 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:04:43.955 + [[ -d /home/vagrant/spdk_repo/output ]]
00:04:43.955 + [[ nvme-vg-autotest == pkgdep-* ]]
00:04:43.955 + cd /home/vagrant/spdk_repo
00:04:43.955 + source /etc/os-release
00:04:43.955 ++ NAME='Fedora Linux'
00:04:43.955 ++ VERSION='39 (Cloud Edition)'
00:04:43.955 ++ ID=fedora
00:04:43.955 ++ VERSION_ID=39
00:04:43.955 ++ VERSION_CODENAME=
00:04:43.955 ++ PLATFORM_ID=platform:f39
00:04:43.955 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:04:43.955 ++ ANSI_COLOR='0;38;2;60;110;180'
00:04:43.955 ++ LOGO=fedora-logo-icon
00:04:43.955 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:04:43.955 ++ HOME_URL=https://fedoraproject.org/
00:04:43.955 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:04:43.955 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:04:43.955 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:04:43.955 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:04:43.955 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:04:43.955 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:04:43.955 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:04:43.955 ++ SUPPORT_END=2024-11-12
00:04:43.955 ++ VARIANT='Cloud Edition'
00:04:43.955 ++ VARIANT_ID=cloud
00:04:43.955 + uname -a
00:04:43.955 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:04:43.955 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:04:44.213 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:44.493 Hugepages
00:04:44.493 node hugesize free / total
00:04:44.493 node0 1048576kB 0 / 0
00:04:44.493 node0 2048kB 0 / 0
00:04:44.493
00:04:44.493 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:44.493 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:04:44.493 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:04:44.751 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:04:44.751 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme3 nvme3n1 nvme3n2 nvme3n3
00:04:44.751 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1
00:04:44.751 + rm -f /tmp/spdk-ld-path
00:04:44.751 + source autorun-spdk.conf
00:04:44.751 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:44.751 ++ SPDK_TEST_NVME=1
00:04:44.751 ++ SPDK_TEST_FTL=1
00:04:44.751 ++ SPDK_TEST_ISAL=1
00:04:44.751 ++ SPDK_RUN_ASAN=1
00:04:44.751 ++ SPDK_RUN_UBSAN=1
00:04:44.751 ++ SPDK_TEST_XNVME=1
00:04:44.751 ++ SPDK_TEST_NVME_FDP=1
00:04:44.751 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:04:44.751 ++ RUN_NIGHTLY=0
00:04:44.751 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:04:44.751 + [[ -n '' ]]
00:04:44.751 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:04:44.751 + for M in /var/spdk/build-*-manifest.txt
00:04:44.751 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:04:44.751 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:04:44.751 + for M in /var/spdk/build-*-manifest.txt
00:04:44.751 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:04:44.751 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:04:44.751 + for M in /var/spdk/build-*-manifest.txt
00:04:44.751 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:04:44.751 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:04:44.751 ++ uname
00:04:44.751 + [[ Linux == \L\i\n\u\x ]]
00:04:44.751 + sudo dmesg -T
00:04:44.751 + sudo dmesg --clear
00:04:44.751 + dmesg_pid=5404
00:04:44.751 + sudo dmesg -Tw
00:04:44.751 + [[ Fedora Linux == FreeBSD ]]
00:04:44.751 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:04:44.751 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:04:44.751 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:04:44.751 + [[ -x /usr/src/fio-static/fio ]]
00:04:44.751 + export FIO_BIN=/usr/src/fio-static/fio
00:04:44.751 + FIO_BIN=/usr/src/fio-static/fio
00:04:44.751 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:04:44.751 + [[ ! -v VFIO_QEMU_BIN ]]
00:04:44.751 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:04:44.751 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:04:44.751 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:04:44.751 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:04:44.751 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:04:44.751 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:04:44.751 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:04:44.751 Test configuration:
00:04:44.751 SPDK_RUN_FUNCTIONAL_TEST=1
00:04:44.752 SPDK_TEST_NVME=1
00:04:44.752 SPDK_TEST_FTL=1
00:04:44.752 SPDK_TEST_ISAL=1
00:04:44.752 SPDK_RUN_ASAN=1
00:04:44.752 SPDK_RUN_UBSAN=1
00:04:44.752 SPDK_TEST_XNVME=1
00:04:44.752 SPDK_TEST_NVME_FDP=1
00:04:44.752 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:04:45.010 RUN_NIGHTLY=0 07:41:07 -- common/autotest_common.sh@1688 -- $ [[ n == y ]]
00:04:45.010 07:41:07 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:04:45.010 07:41:07 -- scripts/common.sh@15 -- $ shopt -s extglob
00:04:45.010 07:41:07 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:04:45.010 07:41:07 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:04:45.010 07:41:07 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:45.010 07:41:07 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:45.010 07:41:07 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:45.010 07:41:07 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:45.010 07:41:07 -- paths/export.sh@5 -- $ export PATH
00:04:45.010 07:41:07 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:45.010 07:41:07 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:04:45.010 07:41:07 -- common/autobuild_common.sh@486 -- $ date +%s
00:04:45.010 07:41:07 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730878867.XXXXXX
00:04:45.010 07:41:07 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730878867.7ja88t
00:04:45.010 07:41:07 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:04:45.010 07:41:07 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:04:45.010 07:41:07 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:04:45.010 07:41:07 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:04:45.010 07:41:07 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:04:45.010 07:41:07 -- common/autobuild_common.sh@502 -- $ get_config_params
00:04:45.010 07:41:07 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:04:45.010 07:41:07 -- common/autotest_common.sh@10 -- $ set +x
00:04:45.010 07:41:07 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:04:45.010 07:41:07 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:04:45.010 07:41:07 -- pm/common@17 -- $ local monitor
00:04:45.010 07:41:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:45.010 07:41:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:45.010 07:41:07 -- pm/common@25 -- $ sleep 1
00:04:45.010 07:41:07 -- pm/common@21 -- $ date +%s
00:04:45.010 07:41:07 -- pm/common@21 -- $ date +%s
00:04:45.010 07:41:07 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730878867
00:04:45.010 07:41:07 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730878867
00:04:45.010 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730878867_collect-vmstat.pm.log
00:04:45.010 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730878867_collect-cpu-load.pm.log
00:04:45.944 07:41:08 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:04:45.945 07:41:08 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:04:45.945 07:41:08 -- spdk/autobuild.sh@12 -- $ umask 022
00:04:45.945 07:41:08 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:04:45.945 07:41:08 -- spdk/autobuild.sh@16 -- $ date -u
00:04:45.945 Wed Nov 6 07:41:08 AM UTC 2024
00:04:45.945 07:41:08 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:04:45.945 v25.01-pre-144-gca5713c38
00:04:45.945 07:41:08 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:04:45.945 07:41:08 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:04:45.945 07:41:08 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:04:45.945 07:41:08 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:04:45.945 07:41:08 -- common/autotest_common.sh@10 -- $ set +x
00:04:45.945 ************************************
00:04:45.945 START TEST asan
00:04:45.945 ************************************
00:04:45.945 using asan
00:04:45.945 07:41:08 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan'
00:04:45.945
00:04:45.945 real 0m0.000s
00:04:45.945 user 0m0.000s
00:04:45.945 sys 0m0.000s
00:04:45.945 07:41:08 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:04:45.945 ************************************
00:04:45.945 07:41:08 asan -- common/autotest_common.sh@10 -- $ set +x
00:04:45.945 END TEST asan
00:04:45.945 ************************************
00:04:45.945 07:41:08 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:04:45.945 07:41:08 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:04:45.945 07:41:08 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:04:45.945 07:41:08 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:04:45.945 07:41:08 -- common/autotest_common.sh@10 -- $ set +x
00:04:45.945 ************************************
00:04:45.945 START TEST ubsan
00:04:45.945 ************************************
00:04:45.945 using ubsan
00:04:45.945 07:41:08 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:04:45.945
00:04:45.945 real 0m0.000s
00:04:45.945 user 0m0.000s
00:04:45.945 sys 0m0.000s
00:04:45.945 07:41:08 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:04:45.945 07:41:08 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:04:45.945 ************************************
00:04:45.945 END TEST ubsan
00:04:45.945 ************************************
00:04:45.945 07:41:08 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:04:45.945 07:41:08 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:04:45.945 07:41:08 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:04:45.945 07:41:08 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:04:45.945 07:41:08 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:04:45.945 07:41:08 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:04:45.945 07:41:08 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:04:45.945 07:41:08 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:04:45.945 07:41:08 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:04:46.202 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:04:46.202 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:04:46.769 Using 'verbs' RDMA provider
00:05:02.607 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:05:14.800 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:05:14.800 Creating mk/config.mk...done.
00:05:14.800 Creating mk/cc.flags.mk...done.
00:05:14.800 Type 'make' to build.
00:05:14.800 07:41:36 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:05:14.800 07:41:36 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:05:14.800 07:41:36 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:05:14.800 07:41:36 -- common/autotest_common.sh@10 -- $ set +x
00:05:14.800 ************************************
00:05:14.800 START TEST make
00:05:14.800 ************************************
00:05:14.800 07:41:36 make -- common/autotest_common.sh@1125 -- $ make -j10
00:05:14.800 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:05:14.801 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:05:14.801 meson setup builddir \
00:05:14.801 -Dwith-libaio=enabled \
00:05:14.801 -Dwith-liburing=enabled \
00:05:14.801 -Dwith-libvfn=disabled \
00:05:14.801 -Dwith-spdk=disabled \
00:05:14.801 -Dexamples=false \
00:05:14.801 -Dtests=false \
00:05:14.801 -Dtools=false && \
00:05:14.801 meson compile -C builddir && \
00:05:14.801 cd -)
00:05:14.801 make[1]: Nothing to be done for 'all'.
00:05:16.699 The Meson build system
00:05:16.699 Version: 1.5.0
00:05:16.699 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:05:16.699 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:05:16.699 Build type: native build
00:05:16.699 Project name: xnvme
00:05:16.699 Project version: 0.7.5
00:05:16.699 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:05:16.699 C linker for the host machine: cc ld.bfd 2.40-14
00:05:16.699 Host machine cpu family: x86_64
00:05:16.699 Host machine cpu: x86_64
00:05:16.699 Message: host_machine.system: linux
00:05:16.699 Compiler for C supports arguments -Wno-missing-braces: YES
00:05:16.699 Compiler for C supports arguments -Wno-cast-function-type: YES
00:05:16.699 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:05:16.699 Run-time dependency threads found: YES
00:05:16.699 Has header "setupapi.h" : NO
00:05:16.699 Has header "linux/blkzoned.h" : YES
00:05:16.699 Has header "linux/blkzoned.h" : YES (cached)
00:05:16.699 Has header "libaio.h" : YES
00:05:16.699 Library aio found: YES
00:05:16.699 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:05:16.699 Run-time dependency liburing found: YES 2.2
00:05:16.699 Dependency libvfn skipped: feature with-libvfn disabled
00:05:16.699 Found CMake: /usr/bin/cmake (3.27.7)
00:05:16.699 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:05:16.699 Subproject spdk : skipped: feature with-spdk disabled
00:05:16.699 Run-time dependency appleframeworks found: NO (tried framework)
00:05:16.699 Run-time dependency appleframeworks found: NO (tried framework)
00:05:16.699 Library rt found: YES
00:05:16.699 Checking for function "clock_gettime" with dependency -lrt: YES
00:05:16.699 Configuring xnvme_config.h using configuration
00:05:16.699 Configuring xnvme.spec using configuration
00:05:16.699 Run-time dependency bash-completion found: YES 2.11
00:05:16.699 Message: Bash-completions: /usr/share/bash-completion/completions
00:05:16.699 Program cp found: YES (/usr/bin/cp)
00:05:16.699 Build targets in project: 3
00:05:16.699
00:05:16.699 xnvme 0.7.5
00:05:16.699
00:05:16.699 Subprojects
00:05:16.699 spdk : NO Feature 'with-spdk' disabled
00:05:16.699
00:05:16.699 User defined options
00:05:16.699 examples : false
00:05:16.699 tests : false
00:05:16.699 tools : false
00:05:16.699 with-libaio : enabled
00:05:16.699 with-liburing: enabled
00:05:16.699 with-libvfn : disabled
00:05:16.699 with-spdk : disabled
00:05:16.699
00:05:16.699 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:05:17.264 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:05:17.264 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:05:17.264 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:05:17.522 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:05:17.522 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:05:17.522 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:05:17.522 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:05:17.522 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:05:17.522 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:05:17.522 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:05:17.522 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:05:17.522 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:05:17.522 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:05:17.522 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:05:17.522 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:05:17.522 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:05:17.522 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:05:17.522 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:05:17.522 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:05:17.780 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:05:17.780 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:05:17.780 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:05:17.780 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:05:17.780 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:05:17.780 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:05:17.780 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:05:17.780 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:05:17.780 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:05:17.780 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:05:17.780 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:05:17.780 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:05:17.780 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:05:17.780 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:05:17.780 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:05:17.780 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:05:17.780 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:05:17.780 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:05:17.780 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:05:17.780 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:05:17.780 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:05:17.780 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:05:17.780 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:05:17.780 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:05:17.780 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:05:17.780 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:05:17.780 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:05:17.780 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:05:18.038 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:05:18.038 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:05:18.038 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:05:18.038 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:05:18.038 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:05:18.038 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:05:18.038 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:05:18.038 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:05:18.038 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:05:18.038 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:05:18.038 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:05:18.038 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:05:18.038 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:05:18.038 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:05:18.038 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:05:18.038 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:05:18.038 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:05:18.296 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:05:18.296 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:05:18.296 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:05:18.296 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:05:18.296 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:05:18.296 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:05:18.296 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:05:18.296 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:05:18.296 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:05:18.554 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:05:19.124 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:05:19.124 [75/76] Linking static target lib/libxnvme.a
00:05:19.124 [76/76] Linking target lib/libxnvme.so.0.7.5
00:05:19.124 INFO: autodetecting backend as ninja
00:05:19.124 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:05:19.124 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:05:29.138 The Meson build system
00:05:29.138 Version: 1.5.0
00:05:29.138 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:05:29.138 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:05:29.138 Build type: native build
00:05:29.138 Program cat found: YES (/usr/bin/cat)
00:05:29.138 Project name: DPDK
00:05:29.138 Project version: 24.03.0
00:05:29.138 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:05:29.138 C linker for the host machine: cc ld.bfd 2.40-14
00:05:29.138 Host machine cpu family: x86_64
00:05:29.138 Host machine cpu: x86_64
00:05:29.138 Message: ## Building in Developer Mode ##
00:05:29.138 Program pkg-config found: YES (/usr/bin/pkg-config)
00:05:29.138 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:05:29.138 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:05:29.138 Program python3 found: YES (/usr/bin/python3)
00:05:29.138 Program cat found: YES (/usr/bin/cat)
00:05:29.138 Compiler for C supports arguments -march=native: YES
00:05:29.138 Checking for size of "void *" : 8
00:05:29.138 Checking for size of "void *" : 8 (cached)
00:05:29.138 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:05:29.138 Library m found: YES
00:05:29.138 Library numa found: YES
00:05:29.138 Has header "numaif.h" : YES
00:05:29.138 Library fdt found: NO
00:05:29.138 Library execinfo found: NO
00:05:29.138 Has header "execinfo.h" : YES
00:05:29.138 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:05:29.138 Run-time dependency libarchive found: NO (tried pkgconfig)
00:05:29.138 Run-time dependency libbsd found: NO (tried pkgconfig)
00:05:29.138 Run-time dependency jansson found: NO (tried pkgconfig)
00:05:29.138 Run-time dependency openssl found: YES 3.1.1
00:05:29.138 Run-time dependency libpcap found: YES 1.10.4
00:05:29.138 Has header "pcap.h" with dependency libpcap: YES
00:05:29.138 Compiler for C supports arguments -Wcast-qual: YES
00:05:29.138 Compiler for C supports arguments -Wdeprecated: YES
00:05:29.138 Compiler for C supports arguments -Wformat: YES
00:05:29.138 Compiler for C supports arguments -Wformat-nonliteral: NO
00:05:29.138 Compiler for C supports arguments -Wformat-security: NO
00:05:29.138 Compiler for C supports arguments -Wmissing-declarations: YES
00:05:29.138 Compiler for C supports arguments -Wmissing-prototypes: YES
00:05:29.138 Compiler for C supports arguments -Wnested-externs: YES
00:05:29.138 Compiler for C supports arguments -Wold-style-definition: YES
00:05:29.138 Compiler for C supports arguments -Wpointer-arith: YES
00:05:29.138 Compiler for C supports arguments -Wsign-compare: YES
00:05:29.138 Compiler for C supports arguments -Wstrict-prototypes: YES
00:05:29.138 Compiler for C supports arguments -Wundef: YES
00:05:29.138 Compiler for C supports arguments -Wwrite-strings: YES
00:05:29.138 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:05:29.138 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:05:29.138 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:05:29.138 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:05:29.138 Program objdump found: YES (/usr/bin/objdump)
00:05:29.138 Compiler for C supports arguments -mavx512f: YES
00:05:29.138 Checking if "AVX512 checking" compiles: YES
00:05:29.138 Fetching value of define "__SSE4_2__" : 1
00:05:29.138 Fetching value of define "__AES__" : 1
00:05:29.138 Fetching value of define "__AVX__" : 1
00:05:29.138 Fetching value of define "__AVX2__" : 1
00:05:29.138 Fetching value of define "__AVX512BW__" : (undefined)
00:05:29.138 Fetching value of define "__AVX512CD__" : (undefined)
00:05:29.138 Fetching value of define "__AVX512DQ__" : (undefined)
00:05:29.138 Fetching value of define "__AVX512F__" : (undefined)
00:05:29.138 Fetching value of define "__AVX512VL__" : (undefined)
00:05:29.138 Fetching value of define "__PCLMUL__" : 1
00:05:29.138 Fetching value of define "__RDRND__" : 1
00:05:29.138 Fetching value of define "__RDSEED__" : 1
00:05:29.138 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:05:29.138 Fetching value of define "__znver1__" : (undefined)
00:05:29.138 Fetching value of define "__znver2__" : (undefined)
00:05:29.138 Fetching value of define "__znver3__" : (undefined)
00:05:29.138 Fetching value of define "__znver4__" : (undefined)
00:05:29.138 Library asan found: YES
00:05:29.138 Compiler for C supports arguments -Wno-format-truncation: YES
00:05:29.138 Message: lib/log: Defining dependency "log"
00:05:29.138 Message: lib/kvargs: Defining dependency "kvargs"
00:05:29.138 Message: lib/telemetry: Defining dependency "telemetry"
00:05:29.138 Library rt found: YES
00:05:29.138 Checking for function "getentropy" : NO
00:05:29.138 Message: lib/eal: Defining dependency "eal"
00:05:29.138 Message: lib/ring: Defining dependency "ring"
00:05:29.138 Message: lib/rcu: Defining dependency "rcu"
00:05:29.138 Message: lib/mempool: Defining dependency "mempool"
00:05:29.138 Message: lib/mbuf: Defining dependency "mbuf"
00:05:29.138 Fetching value of define "__PCLMUL__" : 1 (cached)
00:05:29.138 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:05:29.138 Compiler for C supports arguments -mpclmul: YES
00:05:29.139 Compiler for C supports arguments -maes: YES
00:05:29.139 Compiler for C supports arguments -mavx512f: YES (cached)
00:05:29.139 Compiler for C supports arguments -mavx512bw: YES
00:05:29.139 Compiler for C supports arguments -mavx512dq: YES
00:05:29.139 Compiler for C supports arguments -mavx512vl: YES
00:05:29.139 Compiler for C supports arguments -mvpclmulqdq: YES
00:05:29.139 Compiler for C supports arguments -mavx2: YES
00:05:29.139 Compiler for C supports arguments -mavx: YES
00:05:29.139 Message: lib/net: Defining dependency "net"
00:05:29.139 Message: lib/meter: Defining dependency "meter"
00:05:29.139 Message: lib/ethdev: Defining dependency "ethdev"
00:05:29.139 Message: lib/pci: Defining dependency "pci"
00:05:29.139 Message: lib/cmdline: Defining dependency "cmdline"
00:05:29.139 Message: lib/hash: Defining dependency "hash"
00:05:29.139 Message: lib/timer: Defining dependency "timer"
00:05:29.139 Message: lib/compressdev: Defining dependency "compressdev"
00:05:29.139 Message: lib/cryptodev: Defining dependency "cryptodev"
00:05:29.139 Message: lib/dmadev: Defining dependency "dmadev"
00:05:29.139 Compiler for C supports arguments -Wno-cast-qual: YES
00:05:29.139 Message: lib/power: Defining dependency "power"
00:05:29.139 Message: lib/reorder: Defining dependency "reorder"
00:05:29.139 Message: lib/security: Defining dependency "security"
00:05:29.139 Has header "linux/userfaultfd.h" : YES
00:05:29.139 Has header "linux/vduse.h" : YES
00:05:29.139 Message: lib/vhost: Defining dependency "vhost"
00:05:29.139 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:05:29.139 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:05:29.139 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:05:29.139 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:05:29.139 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:05:29.139 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:05:29.139 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:05:29.139 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:05:29.139 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:05:29.139 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:05:29.139 Program doxygen found: YES (/usr/local/bin/doxygen)
00:05:29.139 Configuring doxy-api-html.conf using configuration
00:05:29.139 Configuring doxy-api-man.conf using configuration
00:05:29.139 Program mandb found: YES (/usr/bin/mandb)
00:05:29.139 Program sphinx-build found: NO
00:05:29.139 Configuring rte_build_config.h using configuration
00:05:29.139 Message:
00:05:29.139 =================
00:05:29.139 Applications Enabled
00:05:29.139 =================
00:05:29.139
00:05:29.139 apps:
00:05:29.139
00:05:29.139
00:05:29.139 Message:
00:05:29.139 =================
00:05:29.139 Libraries Enabled
00:05:29.139 =================
00:05:29.139
00:05:29.139 libs:
00:05:29.139 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:05:29.139 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:05:29.139 cryptodev, dmadev, power, reorder, security, vhost,
00:05:29.139
00:05:29.139 Message:
00:05:29.139 ===============
00:05:29.139 Drivers Enabled
00:05:29.139 ===============
00:05:29.139
00:05:29.139 common:
00:05:29.139
00:05:29.139 bus:
00:05:29.139 pci, vdev,
00:05:29.139 mempool:
00:05:29.139 ring,
00:05:29.139 dma:
00:05:29.139
00:05:29.139 net:
00:05:29.139
00:05:29.139 crypto:
00:05:29.139
00:05:29.139 compress:
00:05:29.139
00:05:29.139 vdpa:
00:05:29.139
00:05:29.139
00:05:29.139 Message:
00:05:29.139 =================
00:05:29.139 Content Skipped
00:05:29.139 =================
00:05:29.139
00:05:29.139 apps:
00:05:29.139 dumpcap: explicitly disabled via build config
00:05:29.139 graph: explicitly disabled via build config
00:05:29.139 pdump: explicitly disabled via build config
00:05:29.139 proc-info: explicitly disabled via build config
00:05:29.139 test-acl: explicitly disabled via build config
00:05:29.139 test-bbdev: explicitly disabled via build config
00:05:29.139 test-cmdline: explicitly disabled via build config
00:05:29.139 test-compress-perf: explicitly disabled via build config
00:05:29.139 test-crypto-perf: explicitly disabled via build config
00:05:29.139 test-dma-perf: explicitly disabled via build config
00:05:29.139 test-eventdev: explicitly disabled via build config
00:05:29.139 test-fib: explicitly disabled via build config
00:05:29.139 test-flow-perf: explicitly disabled via build config
00:05:29.139 test-gpudev: explicitly disabled via build config
00:05:29.139 test-mldev: explicitly disabled via build config
00:05:29.139 test-pipeline: explicitly disabled via build config
00:05:29.139 test-pmd: explicitly disabled via build config
00:05:29.139 test-regex: explicitly disabled via build config
00:05:29.139 test-sad: explicitly disabled via build config
00:05:29.139 test-security-perf: explicitly disabled via build config
00:05:29.139
00:05:29.139 libs:
00:05:29.139 argparse: explicitly disabled via build config
00:05:29.139 metrics: explicitly disabled via build config
00:05:29.139 acl: explicitly disabled via build config
00:05:29.139 bbdev: explicitly disabled via build config
00:05:29.139 bitratestats: explicitly disabled via build config
00:05:29.139 bpf: explicitly disabled via build config
00:05:29.139 cfgfile: explicitly disabled via build config
00:05:29.139 distributor: explicitly disabled via build config
00:05:29.139 efd: explicitly disabled via build config
00:05:29.139 eventdev: explicitly disabled via build config
00:05:29.139 dispatcher: explicitly disabled via build config
00:05:29.139 gpudev: explicitly disabled via build config
00:05:29.139 gro: explicitly disabled via build config
00:05:29.139 gso: explicitly disabled via build config
00:05:29.139 ip_frag: explicitly disabled via build config
00:05:29.139 jobstats: explicitly disabled via build config
00:05:29.139 latencystats: explicitly disabled via build config
00:05:29.139 lpm: explicitly disabled via build config
00:05:29.139 member: explicitly disabled via build config
00:05:29.139 pcapng: explicitly disabled via build config
00:05:29.139 rawdev: explicitly disabled via build config
00:05:29.139 regexdev: explicitly disabled via build config
00:05:29.139 mldev: explicitly disabled via build config
00:05:29.139 rib: explicitly disabled via build config
00:05:29.139 sched: explicitly disabled via build config
00:05:29.139 stack: explicitly disabled via build config
00:05:29.139 ipsec: explicitly disabled via build config
00:05:29.139 pdcp: explicitly disabled via build config
00:05:29.139 fib: explicitly disabled via build config
00:05:29.139 port: explicitly disabled via build config
00:05:29.139 pdump: explicitly disabled via build config
00:05:29.139 table: explicitly disabled via build config
00:05:29.139 pipeline: explicitly disabled via build config
00:05:29.139 graph: explicitly disabled via build config
00:05:29.139 node: explicitly disabled via build config
00:05:29.139
00:05:29.139 drivers:
00:05:29.139 common/cpt: not in enabled drivers build config
00:05:29.139 common/dpaax: not in enabled drivers build config
00:05:29.139 common/iavf: not in enabled drivers build config
00:05:29.139 common/idpf: not in enabled drivers build config
00:05:29.139 common/ionic: not in enabled drivers build config
00:05:29.139 common/mvep: not in enabled drivers build config
00:05:29.139 common/octeontx: not in enabled drivers build config
00:05:29.139 bus/auxiliary: not in enabled drivers build config
00:05:29.139 bus/cdx: not in enabled drivers build config
00:05:29.139 bus/dpaa: not in enabled drivers build config
00:05:29.139 bus/fslmc: not in enabled drivers build config
00:05:29.139 bus/ifpga: not in enabled drivers build config
00:05:29.139 bus/platform: not in enabled drivers build config
00:05:29.139 bus/uacce: not in enabled drivers build config
00:05:29.139 bus/vmbus: not in enabled drivers build config
00:05:29.139 common/cnxk: not in enabled drivers build config
00:05:29.139 common/mlx5: not in enabled drivers build config
00:05:29.139 common/nfp: not in enabled drivers build config
00:05:29.139 common/nitrox: not in enabled drivers build config
00:05:29.139 common/qat: not in enabled drivers build config
00:05:29.139 common/sfc_efx: not in enabled drivers build config
00:05:29.139 mempool/bucket: not in enabled drivers build config
00:05:29.139 mempool/cnxk: not in enabled drivers build config
00:05:29.139 mempool/dpaa: not in enabled drivers build config
00:05:29.139 mempool/dpaa2: not in enabled drivers build config
00:05:29.139 mempool/octeontx: not in enabled drivers build config
00:05:29.139 mempool/stack: not in enabled drivers build config
00:05:29.139 dma/cnxk: not in enabled drivers build config
00:05:29.139 dma/dpaa: not in enabled drivers build config
00:05:29.139 dma/dpaa2: not in enabled drivers build config
00:05:29.139 dma/hisilicon:
not in enabled drivers build config 00:05:29.139 dma/idxd: not in enabled drivers build config 00:05:29.139 dma/ioat: not in enabled drivers build config 00:05:29.139 dma/skeleton: not in enabled drivers build config 00:05:29.139 net/af_packet: not in enabled drivers build config 00:05:29.139 net/af_xdp: not in enabled drivers build config 00:05:29.139 net/ark: not in enabled drivers build config 00:05:29.139 net/atlantic: not in enabled drivers build config 00:05:29.139 net/avp: not in enabled drivers build config 00:05:29.139 net/axgbe: not in enabled drivers build config 00:05:29.139 net/bnx2x: not in enabled drivers build config 00:05:29.139 net/bnxt: not in enabled drivers build config 00:05:29.139 net/bonding: not in enabled drivers build config 00:05:29.139 net/cnxk: not in enabled drivers build config 00:05:29.139 net/cpfl: not in enabled drivers build config 00:05:29.139 net/cxgbe: not in enabled drivers build config 00:05:29.139 net/dpaa: not in enabled drivers build config 00:05:29.139 net/dpaa2: not in enabled drivers build config 00:05:29.139 net/e1000: not in enabled drivers build config 00:05:29.139 net/ena: not in enabled drivers build config 00:05:29.139 net/enetc: not in enabled drivers build config 00:05:29.139 net/enetfec: not in enabled drivers build config 00:05:29.139 net/enic: not in enabled drivers build config 00:05:29.139 net/failsafe: not in enabled drivers build config 00:05:29.139 net/fm10k: not in enabled drivers build config 00:05:29.139 net/gve: not in enabled drivers build config 00:05:29.139 net/hinic: not in enabled drivers build config 00:05:29.139 net/hns3: not in enabled drivers build config 00:05:29.139 net/i40e: not in enabled drivers build config 00:05:29.140 net/iavf: not in enabled drivers build config 00:05:29.140 net/ice: not in enabled drivers build config 00:05:29.140 net/idpf: not in enabled drivers build config 00:05:29.140 net/igc: not in enabled drivers build config 00:05:29.140 net/ionic: not in enabled drivers build config 00:05:29.140 net/ipn3ke: not in enabled drivers build config 00:05:29.140 net/ixgbe: not in enabled drivers build config 00:05:29.140 net/mana: not in enabled drivers build config 00:05:29.140 net/memif: not in enabled drivers build config 00:05:29.140 net/mlx4: not in enabled drivers build config 00:05:29.140 net/mlx5: not in enabled drivers build config 00:05:29.140 net/mvneta: not in enabled drivers build config 00:05:29.140 net/mvpp2: not in enabled drivers build config 00:05:29.140 net/netvsc: not in enabled drivers build config 00:05:29.140 net/nfb: not in enabled drivers build config 00:05:29.140 net/nfp: not in enabled drivers build config 00:05:29.140 net/ngbe: not in enabled drivers build config 00:05:29.140 net/null: not in enabled drivers build config 00:05:29.140 net/octeontx: not in enabled drivers build config 00:05:29.140 net/octeon_ep: not in enabled drivers build config 00:05:29.140 net/pcap: not in enabled drivers build config 00:05:29.140 net/pfe: not in enabled drivers build config 00:05:29.140 net/qede: not in enabled drivers build config 00:05:29.140 net/ring: not in enabled drivers build config 00:05:29.140 net/sfc: not in enabled drivers build config 00:05:29.140 net/softnic: not in enabled drivers build config 00:05:29.140 net/tap: not in enabled drivers build config 00:05:29.140 net/thunderx: not in enabled drivers build config 00:05:29.140 net/txgbe: not in enabled drivers build config 00:05:29.140 net/vdev_netvsc: not in enabled drivers build config 00:05:29.140 net/vhost: not in enabled 
drivers build config 00:05:29.140 net/virtio: not in enabled drivers build config 00:05:29.140 net/vmxnet3: not in enabled drivers build config 00:05:29.140 raw/*: missing internal dependency, "rawdev" 00:05:29.140 crypto/armv8: not in enabled drivers build config 00:05:29.140 crypto/bcmfs: not in enabled drivers build config 00:05:29.140 crypto/caam_jr: not in enabled drivers build config 00:05:29.140 crypto/ccp: not in enabled drivers build config 00:05:29.140 crypto/cnxk: not in enabled drivers build config 00:05:29.140 crypto/dpaa_sec: not in enabled drivers build config 00:05:29.140 crypto/dpaa2_sec: not in enabled drivers build config 00:05:29.140 crypto/ipsec_mb: not in enabled drivers build config 00:05:29.140 crypto/mlx5: not in enabled drivers build config 00:05:29.140 crypto/mvsam: not in enabled drivers build config 00:05:29.140 crypto/nitrox: not in enabled drivers build config 00:05:29.140 crypto/null: not in enabled drivers build config 00:05:29.140 crypto/octeontx: not in enabled drivers build config 00:05:29.140 crypto/openssl: not in enabled drivers build config 00:05:29.140 crypto/scheduler: not in enabled drivers build config 00:05:29.140 crypto/uadk: not in enabled drivers build config 00:05:29.140 crypto/virtio: not in enabled drivers build config 00:05:29.140 compress/isal: not in enabled drivers build config 00:05:29.140 compress/mlx5: not in enabled drivers build config 00:05:29.140 compress/nitrox: not in enabled drivers build config 00:05:29.140 compress/octeontx: not in enabled drivers build config 00:05:29.140 compress/zlib: not in enabled drivers build config 00:05:29.140 regex/*: missing internal dependency, "regexdev" 00:05:29.140 ml/*: missing internal dependency, "mldev" 00:05:29.140 vdpa/ifc: not in enabled drivers build config 00:05:29.140 vdpa/mlx5: not in enabled drivers build config 00:05:29.140 vdpa/nfp: not in enabled drivers build config 00:05:29.140 vdpa/sfc: not in enabled drivers build config 00:05:29.140 event/*: missing internal dependency, "eventdev" 00:05:29.140 baseband/*: missing internal dependency, "bbdev" 00:05:29.140 gpu/*: missing internal dependency, "gpudev" 00:05:29.140 00:05:29.140 00:05:29.140 Build targets in project: 85 00:05:29.140 00:05:29.140 DPDK 24.03.0 00:05:29.140 00:05:29.140 User defined options 00:05:29.140 buildtype : debug 00:05:29.140 default_library : shared 00:05:29.140 libdir : lib 00:05:29.140 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:05:29.140 b_sanitize : address 00:05:29.140 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:05:29.140 c_link_args : 00:05:29.140 cpu_instruction_set: native 00:05:29.140 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:05:29.140 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:05:29.140 enable_docs : false 00:05:29.140 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:05:29.140 enable_kmods : false 00:05:29.140 max_lcores : 128 00:05:29.140 tests : false 00:05:29.140 00:05:29.140 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:05:29.140 ninja: Entering directory 
`/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:05:29.140 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:05:29.140 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:05:29.140 [3/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:05:29.140 [4/268] Linking static target lib/librte_kvargs.a 00:05:29.140 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:05:29.140 [6/268] Linking static target lib/librte_log.a 00:05:29.140 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:05:29.398 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:05:29.398 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:05:29.398 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:05:29.398 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:05:29.656 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:05:29.656 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:05:29.656 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:05:29.656 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:05:29.656 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:05:29.656 [17/268] Linking static target lib/librte_telemetry.a 00:05:29.656 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:05:29.914 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:05:29.914 [20/268] Linking target lib/librte_log.so.24.1 00:05:30.173 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:05:30.431 [22/268] Linking target lib/librte_kvargs.so.24.1 00:05:30.431 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:05:30.431 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:05:30.431 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:05:30.689 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:05:30.689 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:05:30.689 [28/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:05:30.689 [29/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:05:30.689 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:05:30.689 [31/268] Linking target lib/librte_telemetry.so.24.1 00:05:30.689 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:05:30.689 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:05:30.947 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:05:30.947 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:05:31.204 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:05:31.204 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:05:31.463 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:05:31.463 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:05:31.463 [40/268] 
Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:05:31.463 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:05:31.463 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:05:31.721 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:05:31.721 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:05:31.721 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:05:31.978 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:05:31.978 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:05:31.978 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:05:32.235 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:05:32.235 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:05:32.493 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:05:32.493 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:05:32.493 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:05:32.751 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:05:32.751 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:05:32.751 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:05:32.751 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:05:33.009 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:05:33.266 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:05:33.266 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:05:33.266 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:05:33.266 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:05:33.524 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:05:33.524 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:05:33.524 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:05:33.781 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:05:33.781 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:05:33.781 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:05:34.039 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:05:34.039 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:05:34.039 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:05:34.039 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:05:34.297 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:05:34.297 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:05:34.297 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:05:34.297 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:05:34.297 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:05:34.556 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:05:34.556 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:05:34.556 [80/268] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:05:34.814 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:05:34.814 [82/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:05:34.814 [83/268] Linking static target lib/librte_ring.a 00:05:34.814 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:05:34.814 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:05:35.072 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:05:35.072 [87/268] Linking static target lib/librte_eal.a 00:05:35.072 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:05:35.330 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:05:35.330 [90/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:05:35.588 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:05:35.588 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:05:35.588 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:05:35.846 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:05:35.846 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:05:35.846 [96/268] Linking static target lib/librte_mempool.a 00:05:35.846 [97/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:05:35.846 [98/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:05:35.846 [99/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:05:35.846 [100/268] Linking static target lib/librte_rcu.a 00:05:36.104 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:05:36.104 [102/268] Linking static target lib/librte_mbuf.a 00:05:36.362 [103/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:05:36.362 [104/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:05:36.362 [105/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:05:36.362 [106/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:05:36.362 [107/268] Linking static target lib/librte_meter.a 00:05:36.620 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:05:36.620 [109/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:05:36.620 [110/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:05:36.620 [111/268] Linking static target lib/librte_net.a 00:05:36.878 [112/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:05:36.878 [113/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:05:37.135 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:05:37.135 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:05:37.135 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:05:37.135 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:05:37.135 [118/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:05:37.391 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:05:37.670 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:05:38.237 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 
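
The objects above are DPDK's core runtime: EAL, ring, rcu, mempool, mbuf, net, and meter, with ethdev underway. A minimal sketch of how a consumer exercises those libraries, assuming hugepages are configured; the pool name "mbuf_pool" and the sizing constants are illustrative, not values from this build:

    /* Minimal DPDK consumer: bring up the EAL, then carve an mbuf pool
     * out of hugepage memory (the canonical first use of eal + mempool + mbuf).
     * Pool name and sizes are illustrative. */
    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_lcore.h>
    #include <rte_mbuf.h>

    int main(int argc, char **argv)
    {
        /* rte_eal_init() consumes EAL arguments: cores, hugepages, PCI allow-list. */
        if (rte_eal_init(argc, argv) < 0) {
            fprintf(stderr, "EAL init failed\n");
            return 1;
        }

        /* 8191 mbufs, 256-entry per-core cache, default data room size. */
        struct rte_mempool *pool = rte_pktmbuf_pool_create(
            "mbuf_pool", 8191, 256, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
        if (pool == NULL) {
            fprintf(stderr, "mbuf pool creation failed\n");
            rte_eal_cleanup();
            return 1;
        }

        rte_eal_cleanup();
        return 0;
    }
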
00:05:38.237 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:05:38.494 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:05:38.495 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:05:38.495 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:05:38.495 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:05:38.495 [127/268] Linking static target lib/librte_pci.a 00:05:38.495 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:05:38.752 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:05:38.752 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:05:38.752 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:05:38.752 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:05:38.752 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:05:39.009 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:05:39.009 [135/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:39.009 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:05:39.009 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:05:39.009 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:05:39.009 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:05:39.009 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:05:39.009 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:05:39.009 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:05:39.009 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:05:39.266 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:05:39.523 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:05:39.523 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:05:39.782 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:05:39.782 [148/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:05:39.782 [149/268] Linking static target lib/librte_cmdline.a 00:05:39.782 [150/268] Linking static target lib/librte_timer.a 00:05:39.782 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:05:40.040 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:05:40.040 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:05:40.298 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:05:40.556 [155/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:05:40.556 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:05:40.556 [157/268] Linking static target lib/librte_ethdev.a 00:05:40.556 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:05:40.556 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:05:40.556 [160/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:05:40.556 [161/268] Linking static target lib/librte_hash.a 00:05:40.814 [162/268] Compiling C 
object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:05:40.814 [163/268] Linking static target lib/librte_compressdev.a 00:05:40.814 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:05:40.814 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:05:41.381 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:05:41.381 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:05:41.381 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:05:41.381 [169/268] Linking static target lib/librte_dmadev.a 00:05:41.381 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:05:41.381 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:05:41.639 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:05:41.639 [173/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:05:41.639 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:41.898 [175/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:05:42.156 [176/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:05:42.156 [177/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:05:42.414 [178/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:05:42.414 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:42.414 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:05:42.414 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:05:42.414 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:05:42.672 [183/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:05:42.672 [184/268] Linking static target lib/librte_cryptodev.a 00:05:42.672 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:05:42.672 [186/268] Linking static target lib/librte_power.a 00:05:43.236 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:05:43.236 [188/268] Linking static target lib/librte_reorder.a 00:05:43.236 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:05:43.236 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:05:43.236 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:05:43.236 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:05:43.237 [193/268] Linking static target lib/librte_security.a 00:05:43.803 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:05:43.803 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:05:44.060 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:05:44.060 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:05:44.318 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:05:44.576 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:05:44.576 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:05:44.834 [201/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:05:44.834 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:05:45.092 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:05:45.350 [204/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:45.350 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:05:45.350 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:05:45.608 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:05:45.608 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:05:45.608 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:05:45.865 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:05:45.865 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:05:45.865 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:05:45.865 [213/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:45.865 [214/268] Linking static target drivers/librte_bus_vdev.a 00:05:45.865 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:46.123 [216/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:05:46.123 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:46.123 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:46.123 [219/268] Linking static target drivers/librte_bus_pci.a 00:05:46.123 [220/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:46.123 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:05:46.123 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:05:46.381 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:05:46.381 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:46.381 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:46.381 [226/268] Linking static target drivers/librte_mempool_ring.a 00:05:46.639 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:47.203 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:05:47.203 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:05:47.461 [230/268] Linking target lib/librte_eal.so.24.1 00:05:47.461 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:05:47.461 [232/268] Linking target lib/librte_meter.so.24.1 00:05:47.461 [233/268] Linking target lib/librte_ring.so.24.1 00:05:47.461 [234/268] Linking target lib/librte_pci.so.24.1 00:05:47.719 [235/268] Linking target lib/librte_timer.so.24.1 00:05:47.719 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:05:47.719 [237/268] Linking target lib/librte_dmadev.so.24.1 00:05:47.719 [238/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:05:47.719 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:05:47.719 [240/268] 
Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:05:47.719 [241/268] Linking target lib/librte_rcu.so.24.1 00:05:47.719 [242/268] Linking target lib/librte_mempool.so.24.1 00:05:47.719 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:05:47.719 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:05:47.978 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:05:47.978 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:05:47.978 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:05:47.978 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:05:47.978 [249/268] Linking target lib/librte_mbuf.so.24.1 00:05:48.236 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:05:48.236 [251/268] Linking target lib/librte_net.so.24.1 00:05:48.236 [252/268] Linking target lib/librte_reorder.so.24.1 00:05:48.236 [253/268] Linking target lib/librte_compressdev.so.24.1 00:05:48.236 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:05:48.236 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:05:48.494 [256/268] Linking target lib/librte_hash.so.24.1 00:05:48.494 [257/268] Linking target lib/librte_cmdline.so.24.1 00:05:48.494 [258/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:05:48.494 [259/268] Linking target lib/librte_security.so.24.1 00:05:48.494 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:05:49.061 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:49.061 [262/268] Linking target lib/librte_ethdev.so.24.1 00:05:49.061 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:05:49.320 [264/268] Linking target lib/librte_power.so.24.1 00:05:52.608 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:05:52.608 [266/268] Linking static target lib/librte_vhost.a 00:05:54.032 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:05:54.032 [268/268] Linking target lib/librte_vhost.so.24.1 00:05:54.032 INFO: autodetecting backend as ninja 00:05:54.032 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:06:20.606 CC lib/ut_mock/mock.o 00:06:20.606 CC lib/ut/ut.o 00:06:20.606 CC lib/log/log.o 00:06:20.606 CC lib/log/log_flags.o 00:06:20.606 CC lib/log/log_deprecated.o 00:06:20.606 LIB libspdk_ut.a 00:06:20.606 LIB libspdk_ut_mock.a 00:06:20.606 SO libspdk_ut.so.2.0 00:06:20.606 SO libspdk_ut_mock.so.6.0 00:06:20.607 SYMLINK libspdk_ut.so 00:06:20.607 LIB libspdk_log.a 00:06:20.607 SYMLINK libspdk_ut_mock.so 00:06:20.607 SO libspdk_log.so.7.1 00:06:20.607 SYMLINK libspdk_log.so 00:06:20.607 CC lib/dma/dma.o 00:06:20.607 CC lib/ioat/ioat.o 00:06:20.607 CC lib/util/base64.o 00:06:20.607 CXX lib/trace_parser/trace.o 00:06:20.607 CC lib/util/bit_array.o 00:06:20.607 CC lib/util/cpuset.o 00:06:20.607 CC lib/util/crc16.o 00:06:20.607 CC lib/util/crc32.o 00:06:20.607 CC lib/util/crc32c.o 00:06:20.607 CC lib/vfio_user/host/vfio_user_pci.o 00:06:20.607 CC lib/util/crc32_ieee.o 00:06:20.607 CC lib/vfio_user/host/vfio_user.o 00:06:20.607 CC lib/util/crc64.o 00:06:20.607 CC lib/util/dif.o 00:06:20.607 CC lib/util/fd.o 
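
At this point the vendored DPDK subproject has finished linking ([268/268] librte_vhost) and the log switches to SPDK's own make output, starting with lib/log and lib/util. A short sketch of the logging API those first objects provide; the message text is made up for illustration:

    /* Sketch of SPDK's lib/log API (spdk/log.h), among the first libraries
     * built above. Message contents are illustrative. */
    #include "spdk/log.h"

    int main(void)
    {
        /* Emit DEBUG and above to stderr. */
        spdk_log_set_print_level(SPDK_LOG_DEBUG);

        SPDK_NOTICELOG("bring-up checkpoint %d\n", 1);
        SPDK_ERRLOG("error-path message\n");
        return 0;
    }
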
00:06:20.607 CC lib/util/fd_group.o 00:06:20.607 LIB libspdk_dma.a 00:06:20.607 CC lib/util/file.o 00:06:20.607 CC lib/util/hexlify.o 00:06:20.607 SO libspdk_dma.so.5.0 00:06:20.607 CC lib/util/iov.o 00:06:20.607 LIB libspdk_vfio_user.a 00:06:20.607 CC lib/util/math.o 00:06:20.607 SO libspdk_vfio_user.so.5.0 00:06:20.607 SYMLINK libspdk_dma.so 00:06:20.607 CC lib/util/net.o 00:06:20.607 LIB libspdk_ioat.a 00:06:20.607 CC lib/util/pipe.o 00:06:20.607 SYMLINK libspdk_vfio_user.so 00:06:20.607 CC lib/util/strerror_tls.o 00:06:20.607 CC lib/util/string.o 00:06:20.607 SO libspdk_ioat.so.7.0 00:06:20.607 SYMLINK libspdk_ioat.so 00:06:20.607 CC lib/util/uuid.o 00:06:20.607 CC lib/util/xor.o 00:06:20.607 CC lib/util/zipf.o 00:06:20.607 CC lib/util/md5.o 00:06:20.607 LIB libspdk_trace_parser.a 00:06:20.607 LIB libspdk_util.a 00:06:20.607 SO libspdk_trace_parser.so.6.0 00:06:20.607 SO libspdk_util.so.10.0 00:06:20.607 SYMLINK libspdk_trace_parser.so 00:06:20.607 SYMLINK libspdk_util.so 00:06:20.607 CC lib/vmd/vmd.o 00:06:20.607 CC lib/vmd/led.o 00:06:20.607 CC lib/env_dpdk/env.o 00:06:20.607 CC lib/json/json_util.o 00:06:20.607 CC lib/json/json_parse.o 00:06:20.607 CC lib/env_dpdk/memory.o 00:06:20.607 CC lib/conf/conf.o 00:06:20.607 CC lib/idxd/idxd.o 00:06:20.607 CC lib/rdma_provider/common.o 00:06:20.607 CC lib/rdma_utils/rdma_utils.o 00:06:20.607 CC lib/rdma_provider/rdma_provider_verbs.o 00:06:20.607 CC lib/idxd/idxd_user.o 00:06:20.607 CC lib/idxd/idxd_kernel.o 00:06:20.607 CC lib/json/json_write.o 00:06:20.607 LIB libspdk_conf.a 00:06:20.607 SO libspdk_conf.so.6.0 00:06:20.607 LIB libspdk_rdma_provider.a 00:06:20.607 SO libspdk_rdma_provider.so.6.0 00:06:20.607 LIB libspdk_rdma_utils.a 00:06:20.607 SO libspdk_rdma_utils.so.1.0 00:06:20.607 SYMLINK libspdk_conf.so 00:06:20.607 CC lib/env_dpdk/pci.o 00:06:20.607 SYMLINK libspdk_rdma_provider.so 00:06:20.607 CC lib/env_dpdk/init.o 00:06:20.607 CC lib/env_dpdk/threads.o 00:06:20.607 CC lib/env_dpdk/pci_ioat.o 00:06:20.607 SYMLINK libspdk_rdma_utils.so 00:06:20.607 CC lib/env_dpdk/pci_virtio.o 00:06:20.607 LIB libspdk_json.a 00:06:20.607 SO libspdk_json.so.6.0 00:06:20.607 LIB libspdk_vmd.a 00:06:20.607 CC lib/env_dpdk/pci_vmd.o 00:06:20.607 SO libspdk_vmd.so.6.0 00:06:20.607 SYMLINK libspdk_json.so 00:06:20.607 CC lib/env_dpdk/pci_idxd.o 00:06:20.607 CC lib/env_dpdk/pci_event.o 00:06:20.607 CC lib/env_dpdk/sigbus_handler.o 00:06:20.607 LIB libspdk_idxd.a 00:06:20.607 SYMLINK libspdk_vmd.so 00:06:20.607 SO libspdk_idxd.so.12.1 00:06:20.607 CC lib/env_dpdk/pci_dpdk.o 00:06:20.607 CC lib/env_dpdk/pci_dpdk_2207.o 00:06:20.607 CC lib/env_dpdk/pci_dpdk_2211.o 00:06:20.607 SYMLINK libspdk_idxd.so 00:06:20.607 CC lib/jsonrpc/jsonrpc_server.o 00:06:20.607 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:06:20.607 CC lib/jsonrpc/jsonrpc_client.o 00:06:20.607 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:06:20.607 LIB libspdk_jsonrpc.a 00:06:20.607 SO libspdk_jsonrpc.so.6.0 00:06:20.607 SYMLINK libspdk_jsonrpc.so 00:06:20.865 CC lib/rpc/rpc.o 00:06:21.123 LIB libspdk_env_dpdk.a 00:06:21.123 SO libspdk_env_dpdk.so.15.1 00:06:21.381 LIB libspdk_rpc.a 00:06:21.381 SO libspdk_rpc.so.6.0 00:06:21.381 SYMLINK libspdk_rpc.so 00:06:21.381 SYMLINK libspdk_env_dpdk.so 00:06:21.639 CC lib/notify/notify_rpc.o 00:06:21.639 CC lib/notify/notify.o 00:06:21.639 CC lib/trace/trace_flags.o 00:06:21.639 CC lib/trace/trace.o 00:06:21.639 CC lib/keyring/keyring.o 00:06:21.639 CC lib/keyring/keyring_rpc.o 00:06:21.639 CC lib/trace/trace_rpc.o 00:06:21.897 LIB libspdk_notify.a 00:06:21.897 SO 
libspdk_notify.so.6.0 00:06:21.897 SYMLINK libspdk_notify.so 00:06:21.897 LIB libspdk_keyring.a 00:06:21.897 LIB libspdk_trace.a 00:06:21.897 SO libspdk_keyring.so.2.0 00:06:22.155 SO libspdk_trace.so.11.0 00:06:22.155 SYMLINK libspdk_keyring.so 00:06:22.155 SYMLINK libspdk_trace.so 00:06:22.414 CC lib/sock/sock.o 00:06:22.414 CC lib/thread/thread.o 00:06:22.414 CC lib/thread/iobuf.o 00:06:22.414 CC lib/sock/sock_rpc.o 00:06:22.987 LIB libspdk_sock.a 00:06:22.987 SO libspdk_sock.so.10.0 00:06:22.987 SYMLINK libspdk_sock.so 00:06:23.245 CC lib/nvme/nvme_ctrlr.o 00:06:23.245 CC lib/nvme/nvme_ctrlr_cmd.o 00:06:23.245 CC lib/nvme/nvme_fabric.o 00:06:23.245 CC lib/nvme/nvme_ns.o 00:06:23.245 CC lib/nvme/nvme_ns_cmd.o 00:06:23.245 CC lib/nvme/nvme_pcie_common.o 00:06:23.245 CC lib/nvme/nvme_pcie.o 00:06:23.245 CC lib/nvme/nvme.o 00:06:23.245 CC lib/nvme/nvme_qpair.o 00:06:24.178 CC lib/nvme/nvme_quirks.o 00:06:24.178 CC lib/nvme/nvme_transport.o 00:06:24.178 CC lib/nvme/nvme_discovery.o 00:06:24.178 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:06:24.437 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:06:24.437 CC lib/nvme/nvme_tcp.o 00:06:24.437 LIB libspdk_thread.a 00:06:24.437 CC lib/nvme/nvme_opal.o 00:06:24.437 SO libspdk_thread.so.11.0 00:06:24.696 SYMLINK libspdk_thread.so 00:06:24.696 CC lib/nvme/nvme_io_msg.o 00:06:24.696 CC lib/nvme/nvme_poll_group.o 00:06:24.696 CC lib/nvme/nvme_zns.o 00:06:24.954 CC lib/nvme/nvme_stubs.o 00:06:24.954 CC lib/nvme/nvme_auth.o 00:06:25.212 CC lib/nvme/nvme_cuse.o 00:06:25.212 CC lib/nvme/nvme_rdma.o 00:06:25.470 CC lib/accel/accel.o 00:06:25.470 CC lib/accel/accel_rpc.o 00:06:25.470 CC lib/accel/accel_sw.o 00:06:25.470 CC lib/blob/blobstore.o 00:06:25.470 CC lib/blob/request.o 00:06:26.053 CC lib/init/json_config.o 00:06:26.053 CC lib/virtio/virtio.o 00:06:26.053 CC lib/virtio/virtio_vhost_user.o 00:06:26.053 CC lib/virtio/virtio_vfio_user.o 00:06:26.325 CC lib/init/subsystem.o 00:06:26.325 CC lib/init/subsystem_rpc.o 00:06:26.325 CC lib/init/rpc.o 00:06:26.325 CC lib/blob/zeroes.o 00:06:26.325 CC lib/virtio/virtio_pci.o 00:06:26.325 CC lib/blob/blob_bs_dev.o 00:06:26.583 LIB libspdk_init.a 00:06:26.583 CC lib/fsdev/fsdev.o 00:06:26.583 CC lib/fsdev/fsdev_io.o 00:06:26.583 CC lib/fsdev/fsdev_rpc.o 00:06:26.583 SO libspdk_init.so.6.0 00:06:26.583 SYMLINK libspdk_init.so 00:06:26.841 LIB libspdk_virtio.a 00:06:26.841 LIB libspdk_accel.a 00:06:26.841 SO libspdk_virtio.so.7.0 00:06:26.841 CC lib/event/app.o 00:06:26.841 CC lib/event/log_rpc.o 00:06:26.841 CC lib/event/reactor.o 00:06:26.841 CC lib/event/app_rpc.o 00:06:26.841 SO libspdk_accel.so.16.1 00:06:26.841 SYMLINK libspdk_virtio.so 00:06:26.841 LIB libspdk_nvme.a 00:06:27.099 CC lib/event/scheduler_static.o 00:06:27.099 SYMLINK libspdk_accel.so 00:06:27.099 CC lib/bdev/bdev.o 00:06:27.099 CC lib/bdev/bdev_rpc.o 00:06:27.099 CC lib/bdev/bdev_zone.o 00:06:27.099 CC lib/bdev/part.o 00:06:27.099 SO libspdk_nvme.so.14.1 00:06:27.099 CC lib/bdev/scsi_nvme.o 00:06:27.357 LIB libspdk_fsdev.a 00:06:27.615 SO libspdk_fsdev.so.2.0 00:06:27.615 SYMLINK libspdk_nvme.so 00:06:27.615 LIB libspdk_event.a 00:06:27.615 SO libspdk_event.so.14.0 00:06:27.615 SYMLINK libspdk_fsdev.so 00:06:27.615 SYMLINK libspdk_event.so 00:06:27.873 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:06:28.806 LIB libspdk_fuse_dispatcher.a 00:06:28.806 SO libspdk_fuse_dispatcher.so.1.0 00:06:28.806 SYMLINK libspdk_fuse_dispatcher.so 00:06:30.211 LIB libspdk_blob.a 00:06:30.211 SO libspdk_blob.so.11.0 00:06:30.211 SYMLINK libspdk_blob.so 00:06:30.468 CC 
lib/lvol/lvol.o 00:06:30.468 CC lib/blobfs/blobfs.o 00:06:30.468 CC lib/blobfs/tree.o 00:06:31.033 LIB libspdk_bdev.a 00:06:31.292 SO libspdk_bdev.so.17.0 00:06:31.292 SYMLINK libspdk_bdev.so 00:06:31.552 CC lib/nvmf/ctrlr_discovery.o 00:06:31.552 CC lib/nvmf/ctrlr.o 00:06:31.552 CC lib/nvmf/ctrlr_bdev.o 00:06:31.552 CC lib/nvmf/subsystem.o 00:06:31.552 CC lib/ublk/ublk.o 00:06:31.552 CC lib/ftl/ftl_core.o 00:06:31.552 CC lib/nbd/nbd.o 00:06:31.552 CC lib/scsi/dev.o 00:06:31.810 LIB libspdk_blobfs.a 00:06:31.810 SO libspdk_blobfs.so.10.0 00:06:31.810 CC lib/scsi/lun.o 00:06:31.810 LIB libspdk_lvol.a 00:06:31.810 SO libspdk_lvol.so.10.0 00:06:31.810 SYMLINK libspdk_blobfs.so 00:06:32.069 CC lib/scsi/port.o 00:06:32.069 SYMLINK libspdk_lvol.so 00:06:32.069 CC lib/scsi/scsi.o 00:06:32.069 CC lib/ftl/ftl_init.o 00:06:32.069 CC lib/nbd/nbd_rpc.o 00:06:32.069 CC lib/scsi/scsi_bdev.o 00:06:32.069 CC lib/scsi/scsi_pr.o 00:06:32.327 CC lib/scsi/scsi_rpc.o 00:06:32.327 CC lib/scsi/task.o 00:06:32.327 CC lib/ftl/ftl_layout.o 00:06:32.327 LIB libspdk_nbd.a 00:06:32.327 SO libspdk_nbd.so.7.0 00:06:32.327 SYMLINK libspdk_nbd.so 00:06:32.328 CC lib/ublk/ublk_rpc.o 00:06:32.328 CC lib/ftl/ftl_debug.o 00:06:32.588 CC lib/ftl/ftl_io.o 00:06:32.588 CC lib/nvmf/nvmf.o 00:06:32.588 CC lib/ftl/ftl_sb.o 00:06:32.588 CC lib/nvmf/nvmf_rpc.o 00:06:32.588 LIB libspdk_ublk.a 00:06:32.588 SO libspdk_ublk.so.3.0 00:06:32.847 CC lib/nvmf/transport.o 00:06:32.847 CC lib/nvmf/tcp.o 00:06:32.847 SYMLINK libspdk_ublk.so 00:06:32.847 CC lib/ftl/ftl_l2p.o 00:06:32.847 LIB libspdk_scsi.a 00:06:32.847 CC lib/ftl/ftl_l2p_flat.o 00:06:32.847 CC lib/ftl/ftl_nv_cache.o 00:06:32.847 SO libspdk_scsi.so.9.0 00:06:32.847 SYMLINK libspdk_scsi.so 00:06:32.847 CC lib/nvmf/stubs.o 00:06:33.105 CC lib/ftl/ftl_band.o 00:06:33.105 CC lib/ftl/ftl_band_ops.o 00:06:33.364 CC lib/nvmf/mdns_server.o 00:06:33.623 CC lib/ftl/ftl_writer.o 00:06:33.623 CC lib/nvmf/rdma.o 00:06:33.623 CC lib/ftl/ftl_rq.o 00:06:33.623 CC lib/nvmf/auth.o 00:06:33.881 CC lib/ftl/ftl_reloc.o 00:06:33.881 CC lib/ftl/ftl_l2p_cache.o 00:06:33.881 CC lib/iscsi/conn.o 00:06:33.881 CC lib/ftl/ftl_p2l.o 00:06:33.881 CC lib/vhost/vhost.o 00:06:34.140 CC lib/ftl/ftl_p2l_log.o 00:06:34.140 CC lib/ftl/mngt/ftl_mngt.o 00:06:34.140 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:06:34.399 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:06:34.399 CC lib/ftl/mngt/ftl_mngt_startup.o 00:06:34.658 CC lib/ftl/mngt/ftl_mngt_md.o 00:06:34.658 CC lib/iscsi/init_grp.o 00:06:34.658 CC lib/vhost/vhost_rpc.o 00:06:34.658 CC lib/iscsi/iscsi.o 00:06:34.658 CC lib/iscsi/param.o 00:06:34.658 CC lib/iscsi/portal_grp.o 00:06:34.658 CC lib/vhost/vhost_scsi.o 00:06:34.917 CC lib/iscsi/tgt_node.o 00:06:34.917 CC lib/iscsi/iscsi_subsystem.o 00:06:34.917 CC lib/ftl/mngt/ftl_mngt_misc.o 00:06:34.917 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:06:35.176 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:06:35.176 CC lib/ftl/mngt/ftl_mngt_band.o 00:06:35.176 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:06:35.176 CC lib/vhost/vhost_blk.o 00:06:35.176 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:06:35.435 CC lib/vhost/rte_vhost_user.o 00:06:35.435 CC lib/iscsi/iscsi_rpc.o 00:06:35.435 CC lib/iscsi/task.o 00:06:35.435 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:06:35.435 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:06:35.435 CC lib/ftl/utils/ftl_conf.o 00:06:35.694 CC lib/ftl/utils/ftl_md.o 00:06:35.694 CC lib/ftl/utils/ftl_mempool.o 00:06:35.694 CC lib/ftl/utils/ftl_bitmap.o 00:06:35.953 CC lib/ftl/utils/ftl_property.o 00:06:35.953 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:06:35.953 
CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:06:35.953 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:06:35.953 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:06:36.212 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:06:36.212 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:06:36.212 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:06:36.212 CC lib/ftl/upgrade/ftl_sb_v3.o 00:06:36.212 CC lib/ftl/upgrade/ftl_sb_v5.o 00:06:36.212 CC lib/ftl/nvc/ftl_nvc_dev.o 00:06:36.470 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:06:36.470 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:06:36.470 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:06:36.470 CC lib/ftl/base/ftl_base_dev.o 00:06:36.470 CC lib/ftl/base/ftl_base_bdev.o 00:06:36.470 CC lib/ftl/ftl_trace.o 00:06:36.470 LIB libspdk_nvmf.a 00:06:36.470 LIB libspdk_vhost.a 00:06:36.729 LIB libspdk_iscsi.a 00:06:36.729 SO libspdk_nvmf.so.20.0 00:06:36.729 SO libspdk_vhost.so.8.0 00:06:36.729 SO libspdk_iscsi.so.8.0 00:06:36.729 LIB libspdk_ftl.a 00:06:36.729 SYMLINK libspdk_vhost.so 00:06:36.988 SYMLINK libspdk_nvmf.so 00:06:36.988 SYMLINK libspdk_iscsi.so 00:06:36.988 SO libspdk_ftl.so.9.0 00:06:37.247 SYMLINK libspdk_ftl.so 00:06:37.842 CC module/env_dpdk/env_dpdk_rpc.o 00:06:37.842 CC module/accel/ioat/accel_ioat.o 00:06:37.842 CC module/accel/dsa/accel_dsa.o 00:06:37.842 CC module/blob/bdev/blob_bdev.o 00:06:37.842 CC module/keyring/file/keyring.o 00:06:37.842 CC module/accel/iaa/accel_iaa.o 00:06:37.842 CC module/fsdev/aio/fsdev_aio.o 00:06:37.842 CC module/scheduler/dynamic/scheduler_dynamic.o 00:06:37.842 CC module/sock/posix/posix.o 00:06:37.842 CC module/accel/error/accel_error.o 00:06:37.842 LIB libspdk_env_dpdk_rpc.a 00:06:37.842 SO libspdk_env_dpdk_rpc.so.6.0 00:06:37.842 SYMLINK libspdk_env_dpdk_rpc.so 00:06:37.842 CC module/accel/error/accel_error_rpc.o 00:06:37.842 CC module/keyring/file/keyring_rpc.o 00:06:38.101 CC module/accel/ioat/accel_ioat_rpc.o 00:06:38.101 CC module/accel/iaa/accel_iaa_rpc.o 00:06:38.101 LIB libspdk_scheduler_dynamic.a 00:06:38.101 CC module/accel/dsa/accel_dsa_rpc.o 00:06:38.101 SO libspdk_scheduler_dynamic.so.4.0 00:06:38.101 LIB libspdk_accel_error.a 00:06:38.101 LIB libspdk_keyring_file.a 00:06:38.101 SO libspdk_accel_error.so.2.0 00:06:38.101 SYMLINK libspdk_scheduler_dynamic.so 00:06:38.101 LIB libspdk_blob_bdev.a 00:06:38.101 SO libspdk_keyring_file.so.2.0 00:06:38.101 SO libspdk_blob_bdev.so.11.0 00:06:38.101 LIB libspdk_accel_ioat.a 00:06:38.101 LIB libspdk_accel_dsa.a 00:06:38.101 SYMLINK libspdk_accel_error.so 00:06:38.101 LIB libspdk_accel_iaa.a 00:06:38.101 SYMLINK libspdk_keyring_file.so 00:06:38.101 SO libspdk_accel_ioat.so.6.0 00:06:38.101 CC module/fsdev/aio/fsdev_aio_rpc.o 00:06:38.101 CC module/fsdev/aio/linux_aio_mgr.o 00:06:38.101 SO libspdk_accel_iaa.so.3.0 00:06:38.101 SYMLINK libspdk_blob_bdev.so 00:06:38.101 SO libspdk_accel_dsa.so.5.0 00:06:38.360 SYMLINK libspdk_accel_dsa.so 00:06:38.360 SYMLINK libspdk_accel_ioat.so 00:06:38.360 SYMLINK libspdk_accel_iaa.so 00:06:38.360 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:06:38.360 CC module/keyring/linux/keyring.o 00:06:38.360 CC module/keyring/linux/keyring_rpc.o 00:06:38.619 LIB libspdk_keyring_linux.a 00:06:38.619 CC module/scheduler/gscheduler/gscheduler.o 00:06:38.619 SO libspdk_keyring_linux.so.1.0 00:06:38.619 LIB libspdk_scheduler_dpdk_governor.a 00:06:38.619 CC module/bdev/delay/vbdev_delay.o 00:06:38.619 SO libspdk_scheduler_dpdk_governor.so.4.0 00:06:38.619 CC module/bdev/error/vbdev_error.o 00:06:38.619 CC module/blobfs/bdev/blobfs_bdev.o 00:06:38.619 SYMLINK libspdk_keyring_linux.so 
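
The build has now moved from core libraries to pluggable modules: accel (ioat/dsa/iaa/error), sock/posix, scheduler, keyring, fsdev/aio, and the first bdev modules. These register themselves with SPDK's app framework rather than being called directly; a hedged sketch of that entry point, where the app name and callback are illustrative:

    /* Hedged sketch of the app framework that hosts the modules above.
     * "module_host" and start_fn are illustrative names. */
    #include "spdk/event.h"

    static void
    start_fn(void *ctx)
    {
        /* Reactors are running and registered modules are initialized;
         * a real app would open bdevs or start a target here. */
        spdk_app_stop(0);
    }

    int main(int argc, char **argv)
    {
        struct spdk_app_opts opts;

        spdk_app_opts_init(&opts, sizeof(opts));
        opts.name = "module_host";

        int rc = spdk_app_start(&opts, start_fn, NULL);
        spdk_app_fini();
        return rc;
    }
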
00:06:38.619 CC module/bdev/gpt/gpt.o 00:06:38.619 SYMLINK libspdk_scheduler_dpdk_governor.so 00:06:38.619 CC module/bdev/gpt/vbdev_gpt.o 00:06:38.619 CC module/bdev/lvol/vbdev_lvol.o 00:06:38.619 LIB libspdk_fsdev_aio.a 00:06:38.619 LIB libspdk_scheduler_gscheduler.a 00:06:38.619 SO libspdk_fsdev_aio.so.1.0 00:06:38.619 SO libspdk_scheduler_gscheduler.so.4.0 00:06:38.619 LIB libspdk_sock_posix.a 00:06:38.878 SO libspdk_sock_posix.so.6.0 00:06:38.878 SYMLINK libspdk_scheduler_gscheduler.so 00:06:38.878 CC module/bdev/malloc/bdev_malloc.o 00:06:38.878 SYMLINK libspdk_fsdev_aio.so 00:06:38.878 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:06:38.878 SYMLINK libspdk_sock_posix.so 00:06:38.878 CC module/bdev/error/vbdev_error_rpc.o 00:06:38.878 CC module/bdev/null/bdev_null.o 00:06:38.878 CC module/bdev/nvme/bdev_nvme.o 00:06:38.878 LIB libspdk_bdev_gpt.a 00:06:39.137 CC module/bdev/delay/vbdev_delay_rpc.o 00:06:39.137 LIB libspdk_blobfs_bdev.a 00:06:39.137 SO libspdk_bdev_gpt.so.6.0 00:06:39.137 CC module/bdev/passthru/vbdev_passthru.o 00:06:39.137 CC module/bdev/raid/bdev_raid.o 00:06:39.137 SO libspdk_blobfs_bdev.so.6.0 00:06:39.137 SYMLINK libspdk_bdev_gpt.so 00:06:39.137 LIB libspdk_bdev_error.a 00:06:39.137 SYMLINK libspdk_blobfs_bdev.so 00:06:39.137 SO libspdk_bdev_error.so.6.0 00:06:39.137 LIB libspdk_bdev_delay.a 00:06:39.137 SYMLINK libspdk_bdev_error.so 00:06:39.137 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:06:39.137 SO libspdk_bdev_delay.so.6.0 00:06:39.396 CC module/bdev/malloc/bdev_malloc_rpc.o 00:06:39.396 CC module/bdev/split/vbdev_split.o 00:06:39.396 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:06:39.396 CC module/bdev/zone_block/vbdev_zone_block.o 00:06:39.396 CC module/bdev/null/bdev_null_rpc.o 00:06:39.396 SYMLINK libspdk_bdev_delay.so 00:06:39.396 LIB libspdk_bdev_passthru.a 00:06:39.396 SO libspdk_bdev_passthru.so.6.0 00:06:39.396 LIB libspdk_bdev_malloc.a 00:06:39.396 CC module/bdev/xnvme/bdev_xnvme.o 00:06:39.396 LIB libspdk_bdev_null.a 00:06:39.396 SO libspdk_bdev_malloc.so.6.0 00:06:39.655 SYMLINK libspdk_bdev_passthru.so 00:06:39.655 SO libspdk_bdev_null.so.6.0 00:06:39.655 CC module/bdev/split/vbdev_split_rpc.o 00:06:39.655 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:06:39.655 CC module/bdev/aio/bdev_aio.o 00:06:39.655 SYMLINK libspdk_bdev_malloc.so 00:06:39.655 SYMLINK libspdk_bdev_null.so 00:06:39.655 CC module/bdev/raid/bdev_raid_rpc.o 00:06:39.655 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:06:39.655 LIB libspdk_bdev_lvol.a 00:06:39.914 LIB libspdk_bdev_split.a 00:06:39.914 SO libspdk_bdev_lvol.so.6.0 00:06:39.914 SO libspdk_bdev_split.so.6.0 00:06:39.914 LIB libspdk_bdev_xnvme.a 00:06:39.914 CC module/bdev/ftl/bdev_ftl.o 00:06:39.914 SO libspdk_bdev_xnvme.so.3.0 00:06:39.914 SYMLINK libspdk_bdev_lvol.so 00:06:39.914 SYMLINK libspdk_bdev_split.so 00:06:39.914 CC module/bdev/ftl/bdev_ftl_rpc.o 00:06:39.914 CC module/bdev/raid/bdev_raid_sb.o 00:06:39.914 LIB libspdk_bdev_zone_block.a 00:06:39.914 SYMLINK libspdk_bdev_xnvme.so 00:06:39.914 CC module/bdev/aio/bdev_aio_rpc.o 00:06:39.914 SO libspdk_bdev_zone_block.so.6.0 00:06:39.914 CC module/bdev/iscsi/bdev_iscsi.o 00:06:39.914 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:06:40.173 SYMLINK libspdk_bdev_zone_block.so 00:06:40.173 CC module/bdev/raid/raid0.o 00:06:40.173 CC module/bdev/virtio/bdev_virtio_scsi.o 00:06:40.173 LIB libspdk_bdev_aio.a 00:06:40.173 CC module/bdev/virtio/bdev_virtio_blk.o 00:06:40.173 SO libspdk_bdev_aio.so.6.0 00:06:40.173 CC module/bdev/raid/raid1.o 00:06:40.173 LIB libspdk_bdev_ftl.a 
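
All of the bdev modules compiled above (gpt, lvol, malloc, null, nvme, passthru, raid, split, zone_block, xnvme, aio, ftl, iscsi, virtio) sit behind the same block-device API. A hedged sketch of the consumer side, which must run on an SPDK thread such as an app-framework start callback; the bdev name "Malloc0" is an assumption, not something this build configures:

    /* Consumer-side sketch of the bdev API the modules above implement.
     * Must run on an SPDK thread; "Malloc0" is an illustrative bdev name. */
    #include "spdk/bdev.h"
    #include "spdk/log.h"

    static void
    bdev_event_cb(enum spdk_bdev_event_type type, struct spdk_bdev *bdev, void *ctx)
    {
        /* Handle hot-remove/resize notifications for the open descriptor. */
    }

    static void
    inspect_bdev(void)
    {
        struct spdk_bdev_desc *desc = NULL;

        if (spdk_bdev_open_ext("Malloc0", false, bdev_event_cb, NULL, &desc) != 0) {
            SPDK_ERRLOG("failed to open bdev\n");
            return;
        }

        struct spdk_bdev *bdev = spdk_bdev_desc_get_bdev(desc);
        SPDK_NOTICELOG("%s: %u-byte blocks\n",
                       spdk_bdev_get_name(bdev), spdk_bdev_get_block_size(bdev));

        spdk_bdev_close(desc);
    }
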
00:06:40.173 SO libspdk_bdev_ftl.so.6.0 00:06:40.173 SYMLINK libspdk_bdev_aio.so 00:06:40.173 CC module/bdev/raid/concat.o 00:06:40.173 CC module/bdev/virtio/bdev_virtio_rpc.o 00:06:40.431 SYMLINK libspdk_bdev_ftl.so 00:06:40.431 CC module/bdev/nvme/bdev_nvme_rpc.o 00:06:40.432 CC module/bdev/nvme/nvme_rpc.o 00:06:40.432 CC module/bdev/nvme/bdev_mdns_client.o 00:06:40.432 LIB libspdk_bdev_iscsi.a 00:06:40.432 SO libspdk_bdev_iscsi.so.6.0 00:06:40.432 CC module/bdev/nvme/vbdev_opal.o 00:06:40.432 SYMLINK libspdk_bdev_iscsi.so 00:06:40.432 CC module/bdev/nvme/vbdev_opal_rpc.o 00:06:40.432 LIB libspdk_bdev_raid.a 00:06:40.432 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:06:40.690 SO libspdk_bdev_raid.so.6.0 00:06:40.690 SYMLINK libspdk_bdev_raid.so 00:06:40.690 LIB libspdk_bdev_virtio.a 00:06:40.948 SO libspdk_bdev_virtio.so.6.0 00:06:40.948 SYMLINK libspdk_bdev_virtio.so 00:06:42.851 LIB libspdk_bdev_nvme.a 00:06:42.851 SO libspdk_bdev_nvme.so.7.0 00:06:42.851 SYMLINK libspdk_bdev_nvme.so 00:06:43.428 CC module/event/subsystems/keyring/keyring.o 00:06:43.428 CC module/event/subsystems/iobuf/iobuf.o 00:06:43.428 CC module/event/subsystems/sock/sock.o 00:06:43.428 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:06:43.428 CC module/event/subsystems/scheduler/scheduler.o 00:06:43.428 CC module/event/subsystems/fsdev/fsdev.o 00:06:43.428 CC module/event/subsystems/vmd/vmd.o 00:06:43.428 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:06:43.428 CC module/event/subsystems/vmd/vmd_rpc.o 00:06:43.428 LIB libspdk_event_fsdev.a 00:06:43.428 LIB libspdk_event_vhost_blk.a 00:06:43.428 LIB libspdk_event_keyring.a 00:06:43.428 SO libspdk_event_fsdev.so.1.0 00:06:43.428 LIB libspdk_event_iobuf.a 00:06:43.428 LIB libspdk_event_scheduler.a 00:06:43.428 LIB libspdk_event_sock.a 00:06:43.428 SO libspdk_event_keyring.so.1.0 00:06:43.428 SO libspdk_event_vhost_blk.so.3.0 00:06:43.428 SO libspdk_event_scheduler.so.4.0 00:06:43.428 SO libspdk_event_iobuf.so.3.0 00:06:43.428 SO libspdk_event_sock.so.5.0 00:06:43.428 LIB libspdk_event_vmd.a 00:06:43.428 SYMLINK libspdk_event_fsdev.so 00:06:43.428 SYMLINK libspdk_event_vhost_blk.so 00:06:43.428 SO libspdk_event_vmd.so.6.0 00:06:43.428 SYMLINK libspdk_event_keyring.so 00:06:43.428 SYMLINK libspdk_event_iobuf.so 00:06:43.428 SYMLINK libspdk_event_sock.so 00:06:43.428 SYMLINK libspdk_event_scheduler.so 00:06:43.706 SYMLINK libspdk_event_vmd.so 00:06:43.706 CC module/event/subsystems/accel/accel.o 00:06:43.965 LIB libspdk_event_accel.a 00:06:43.965 SO libspdk_event_accel.so.6.0 00:06:43.965 SYMLINK libspdk_event_accel.so 00:06:44.223 CC module/event/subsystems/bdev/bdev.o 00:06:44.481 LIB libspdk_event_bdev.a 00:06:44.481 SO libspdk_event_bdev.so.6.0 00:06:44.739 SYMLINK libspdk_event_bdev.so 00:06:44.739 CC module/event/subsystems/scsi/scsi.o 00:06:44.739 CC module/event/subsystems/nbd/nbd.o 00:06:44.739 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:06:44.739 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:06:44.739 CC module/event/subsystems/ublk/ublk.o 00:06:44.998 LIB libspdk_event_scsi.a 00:06:44.998 LIB libspdk_event_nbd.a 00:06:44.998 LIB libspdk_event_ublk.a 00:06:44.998 SO libspdk_event_scsi.so.6.0 00:06:44.998 SO libspdk_event_nbd.so.6.0 00:06:44.998 SO libspdk_event_ublk.so.3.0 00:06:44.998 SYMLINK libspdk_event_nbd.so 00:06:45.256 SYMLINK libspdk_event_scsi.so 00:06:45.256 SYMLINK libspdk_event_ublk.so 00:06:45.256 LIB libspdk_event_nvmf.a 00:06:45.256 SO libspdk_event_nvmf.so.6.0 00:06:45.256 SYMLINK libspdk_event_nvmf.so 00:06:45.256 CC 
module/event/subsystems/vhost_scsi/vhost_scsi.o 00:06:45.256 CC module/event/subsystems/iscsi/iscsi.o 00:06:45.514 LIB libspdk_event_vhost_scsi.a 00:06:45.514 SO libspdk_event_vhost_scsi.so.3.0 00:06:45.514 LIB libspdk_event_iscsi.a 00:06:45.773 SYMLINK libspdk_event_vhost_scsi.so 00:06:45.773 SO libspdk_event_iscsi.so.6.0 00:06:45.773 SYMLINK libspdk_event_iscsi.so 00:06:45.773 SO libspdk.so.6.0 00:06:45.773 SYMLINK libspdk.so 00:06:46.031 CC app/trace_record/trace_record.o 00:06:46.031 CC test/rpc_client/rpc_client_test.o 00:06:46.031 TEST_HEADER include/spdk/accel.h 00:06:46.031 TEST_HEADER include/spdk/accel_module.h 00:06:46.031 TEST_HEADER include/spdk/assert.h 00:06:46.031 CXX app/trace/trace.o 00:06:46.031 TEST_HEADER include/spdk/barrier.h 00:06:46.031 TEST_HEADER include/spdk/base64.h 00:06:46.031 TEST_HEADER include/spdk/bdev.h 00:06:46.031 TEST_HEADER include/spdk/bdev_module.h 00:06:46.293 TEST_HEADER include/spdk/bdev_zone.h 00:06:46.293 TEST_HEADER include/spdk/bit_array.h 00:06:46.293 TEST_HEADER include/spdk/bit_pool.h 00:06:46.293 TEST_HEADER include/spdk/blob_bdev.h 00:06:46.293 TEST_HEADER include/spdk/blobfs_bdev.h 00:06:46.293 TEST_HEADER include/spdk/blobfs.h 00:06:46.293 TEST_HEADER include/spdk/blob.h 00:06:46.293 TEST_HEADER include/spdk/conf.h 00:06:46.293 TEST_HEADER include/spdk/config.h 00:06:46.293 TEST_HEADER include/spdk/cpuset.h 00:06:46.293 TEST_HEADER include/spdk/crc16.h 00:06:46.293 TEST_HEADER include/spdk/crc32.h 00:06:46.293 TEST_HEADER include/spdk/crc64.h 00:06:46.293 TEST_HEADER include/spdk/dif.h 00:06:46.293 TEST_HEADER include/spdk/dma.h 00:06:46.293 TEST_HEADER include/spdk/endian.h 00:06:46.293 TEST_HEADER include/spdk/env_dpdk.h 00:06:46.293 TEST_HEADER include/spdk/env.h 00:06:46.293 TEST_HEADER include/spdk/event.h 00:06:46.293 TEST_HEADER include/spdk/fd_group.h 00:06:46.293 TEST_HEADER include/spdk/fd.h 00:06:46.293 TEST_HEADER include/spdk/file.h 00:06:46.293 TEST_HEADER include/spdk/fsdev.h 00:06:46.293 TEST_HEADER include/spdk/fsdev_module.h 00:06:46.293 TEST_HEADER include/spdk/ftl.h 00:06:46.293 TEST_HEADER include/spdk/fuse_dispatcher.h 00:06:46.293 TEST_HEADER include/spdk/gpt_spec.h 00:06:46.293 TEST_HEADER include/spdk/hexlify.h 00:06:46.293 TEST_HEADER include/spdk/histogram_data.h 00:06:46.293 TEST_HEADER include/spdk/idxd.h 00:06:46.293 TEST_HEADER include/spdk/idxd_spec.h 00:06:46.293 TEST_HEADER include/spdk/init.h 00:06:46.293 CC test/thread/poller_perf/poller_perf.o 00:06:46.293 TEST_HEADER include/spdk/ioat.h 00:06:46.293 CC examples/util/zipf/zipf.o 00:06:46.293 TEST_HEADER include/spdk/ioat_spec.h 00:06:46.293 TEST_HEADER include/spdk/iscsi_spec.h 00:06:46.293 TEST_HEADER include/spdk/json.h 00:06:46.293 CC examples/ioat/perf/perf.o 00:06:46.293 TEST_HEADER include/spdk/jsonrpc.h 00:06:46.293 TEST_HEADER include/spdk/keyring.h 00:06:46.293 TEST_HEADER include/spdk/keyring_module.h 00:06:46.293 TEST_HEADER include/spdk/likely.h 00:06:46.293 TEST_HEADER include/spdk/log.h 00:06:46.293 TEST_HEADER include/spdk/lvol.h 00:06:46.293 TEST_HEADER include/spdk/md5.h 00:06:46.293 TEST_HEADER include/spdk/memory.h 00:06:46.293 TEST_HEADER include/spdk/mmio.h 00:06:46.293 TEST_HEADER include/spdk/nbd.h 00:06:46.293 TEST_HEADER include/spdk/net.h 00:06:46.293 TEST_HEADER include/spdk/notify.h 00:06:46.293 TEST_HEADER include/spdk/nvme.h 00:06:46.293 TEST_HEADER include/spdk/nvme_intel.h 00:06:46.293 CC test/dma/test_dma/test_dma.o 00:06:46.293 TEST_HEADER include/spdk/nvme_ocssd.h 00:06:46.293 TEST_HEADER 
include/spdk/nvme_ocssd_spec.h 00:06:46.293 TEST_HEADER include/spdk/nvme_spec.h 00:06:46.293 CC test/app/bdev_svc/bdev_svc.o 00:06:46.293 TEST_HEADER include/spdk/nvme_zns.h 00:06:46.293 TEST_HEADER include/spdk/nvmf_cmd.h 00:06:46.293 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:06:46.293 TEST_HEADER include/spdk/nvmf.h 00:06:46.293 TEST_HEADER include/spdk/nvmf_spec.h 00:06:46.293 TEST_HEADER include/spdk/nvmf_transport.h 00:06:46.293 TEST_HEADER include/spdk/opal.h 00:06:46.293 TEST_HEADER include/spdk/opal_spec.h 00:06:46.293 TEST_HEADER include/spdk/pci_ids.h 00:06:46.293 TEST_HEADER include/spdk/pipe.h 00:06:46.293 TEST_HEADER include/spdk/queue.h 00:06:46.293 TEST_HEADER include/spdk/reduce.h 00:06:46.293 TEST_HEADER include/spdk/rpc.h 00:06:46.293 TEST_HEADER include/spdk/scheduler.h 00:06:46.293 TEST_HEADER include/spdk/scsi.h 00:06:46.293 TEST_HEADER include/spdk/scsi_spec.h 00:06:46.293 TEST_HEADER include/spdk/sock.h 00:06:46.293 TEST_HEADER include/spdk/stdinc.h 00:06:46.293 TEST_HEADER include/spdk/string.h 00:06:46.293 TEST_HEADER include/spdk/thread.h 00:06:46.293 CC test/env/mem_callbacks/mem_callbacks.o 00:06:46.293 TEST_HEADER include/spdk/trace.h 00:06:46.293 TEST_HEADER include/spdk/trace_parser.h 00:06:46.560 TEST_HEADER include/spdk/tree.h 00:06:46.560 TEST_HEADER include/spdk/ublk.h 00:06:46.560 LINK rpc_client_test 00:06:46.560 TEST_HEADER include/spdk/util.h 00:06:46.560 TEST_HEADER include/spdk/uuid.h 00:06:46.560 TEST_HEADER include/spdk/version.h 00:06:46.560 LINK poller_perf 00:06:46.560 TEST_HEADER include/spdk/vfio_user_pci.h 00:06:46.560 TEST_HEADER include/spdk/vfio_user_spec.h 00:06:46.560 TEST_HEADER include/spdk/vhost.h 00:06:46.560 TEST_HEADER include/spdk/vmd.h 00:06:46.560 TEST_HEADER include/spdk/xor.h 00:06:46.560 TEST_HEADER include/spdk/zipf.h 00:06:46.560 CXX test/cpp_headers/accel.o 00:06:46.560 LINK zipf 00:06:46.560 LINK spdk_trace_record 00:06:46.560 LINK ioat_perf 00:06:46.560 LINK bdev_svc 00:06:46.560 CXX test/cpp_headers/accel_module.o 00:06:46.560 CXX test/cpp_headers/assert.o 00:06:46.560 LINK spdk_trace 00:06:46.819 CXX test/cpp_headers/barrier.o 00:06:46.819 CXX test/cpp_headers/base64.o 00:06:46.819 CXX test/cpp_headers/bdev.o 00:06:46.819 CC examples/ioat/verify/verify.o 00:06:46.819 CXX test/cpp_headers/bdev_module.o 00:06:46.819 CC test/event/event_perf/event_perf.o 00:06:47.077 LINK test_dma 00:06:47.077 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:06:47.077 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:06:47.077 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:06:47.077 CC app/nvmf_tgt/nvmf_main.o 00:06:47.077 CXX test/cpp_headers/bdev_zone.o 00:06:47.077 LINK mem_callbacks 00:06:47.077 LINK verify 00:06:47.077 LINK event_perf 00:06:47.077 CC test/event/reactor/reactor.o 00:06:47.335 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:06:47.335 LINK nvmf_tgt 00:06:47.335 CXX test/cpp_headers/bit_array.o 00:06:47.335 LINK reactor 00:06:47.335 CC test/env/vtophys/vtophys.o 00:06:47.335 CC test/event/reactor_perf/reactor_perf.o 00:06:47.335 CC examples/interrupt_tgt/interrupt_tgt.o 00:06:47.592 CXX test/cpp_headers/bit_pool.o 00:06:47.592 LINK reactor_perf 00:06:47.592 LINK vtophys 00:06:47.592 CC test/accel/dif/dif.o 00:06:47.592 LINK nvme_fuzz 00:06:47.592 LINK interrupt_tgt 00:06:47.592 CC app/iscsi_tgt/iscsi_tgt.o 00:06:47.592 CXX test/cpp_headers/blob_bdev.o 00:06:47.850 CC test/blobfs/mkfs/mkfs.o 00:06:47.850 LINK vhost_fuzz 00:06:47.850 CC test/event/app_repeat/app_repeat.o 00:06:47.850 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 
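Note: the CXX test/cpp_headers/*.o entries in this phase come from SPDK's header self-inclusion test, which compiles one translation unit per public header in the TEST_HEADER list above, so any spdk/*.h that is not self-contained (or not C++-clean) fails the build immediately. A minimal sketch of the idea follows; the paths and compiler flags are illustrative assumptions, not the exact ones the SPDK build system uses.

#!/usr/bin/env bash
# Sketch: compile each public header as its own C++ translation unit.
# The include directory layout and flags are assumptions for illustration.
set -euo pipefail
incdir=include/spdk
workdir=$(mktemp -d)
trap 'rm -rf "$workdir"' EXIT
for hdr in "$incdir"/*.h; do
    name=$(basename "$hdr" .h)
    printf '#include <spdk/%s.h>\n' "$name" > "$workdir/$name.cpp"
    # A header that does not compile standalone fails right here.
    g++ -std=c++17 -Iinclude -c "$workdir/$name.cpp" -o "$workdir/$name.o"
done
echo "all headers compile standalone"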
00:06:47.850 CC test/event/scheduler/scheduler.o 00:06:47.850 CXX test/cpp_headers/blobfs_bdev.o 00:06:47.850 LINK iscsi_tgt 00:06:47.850 LINK mkfs 00:06:48.107 LINK app_repeat 00:06:48.107 LINK env_dpdk_post_init 00:06:48.108 CC examples/thread/thread/thread_ex.o 00:06:48.108 CXX test/cpp_headers/blobfs.o 00:06:48.108 LINK scheduler 00:06:48.365 CC test/lvol/esnap/esnap.o 00:06:48.365 CC test/env/memory/memory_ut.o 00:06:48.365 CC app/spdk_lspci/spdk_lspci.o 00:06:48.365 CXX test/cpp_headers/blob.o 00:06:48.365 LINK thread 00:06:48.365 CC test/nvme/aer/aer.o 00:06:48.365 CXX test/cpp_headers/conf.o 00:06:48.365 CC app/spdk_tgt/spdk_tgt.o 00:06:48.624 LINK spdk_lspci 00:06:48.624 LINK dif 00:06:48.624 CXX test/cpp_headers/config.o 00:06:48.624 CXX test/cpp_headers/cpuset.o 00:06:48.624 CC app/spdk_nvme_perf/perf.o 00:06:48.882 CXX test/cpp_headers/crc16.o 00:06:48.882 CXX test/cpp_headers/crc32.o 00:06:48.882 LINK spdk_tgt 00:06:48.882 LINK aer 00:06:48.882 CC examples/sock/hello_world/hello_sock.o 00:06:48.882 CC test/nvme/reset/reset.o 00:06:49.141 CXX test/cpp_headers/crc64.o 00:06:49.141 CC app/spdk_nvme_identify/identify.o 00:06:49.141 CC app/spdk_nvme_discover/discovery_aer.o 00:06:49.141 CXX test/cpp_headers/dif.o 00:06:49.141 CC test/nvme/sgl/sgl.o 00:06:49.399 LINK reset 00:06:49.399 LINK hello_sock 00:06:49.399 CXX test/cpp_headers/dma.o 00:06:49.399 LINK spdk_nvme_discover 00:06:49.399 LINK iscsi_fuzz 00:06:49.657 LINK sgl 00:06:49.657 CXX test/cpp_headers/endian.o 00:06:49.657 CXX test/cpp_headers/env_dpdk.o 00:06:49.657 CC examples/vmd/lsvmd/lsvmd.o 00:06:49.657 CC test/bdev/bdevio/bdevio.o 00:06:49.916 CXX test/cpp_headers/env.o 00:06:49.916 LINK lsvmd 00:06:49.916 LINK spdk_nvme_perf 00:06:49.916 CC test/nvme/e2edp/nvme_dp.o 00:06:49.916 CC test/app/histogram_perf/histogram_perf.o 00:06:49.916 CC test/app/jsoncat/jsoncat.o 00:06:49.916 LINK memory_ut 00:06:49.916 CXX test/cpp_headers/event.o 00:06:50.174 LINK jsoncat 00:06:50.174 LINK histogram_perf 00:06:50.174 CC examples/vmd/led/led.o 00:06:50.174 CC test/nvme/overhead/overhead.o 00:06:50.174 LINK spdk_nvme_identify 00:06:50.174 LINK bdevio 00:06:50.174 LINK nvme_dp 00:06:50.174 CXX test/cpp_headers/fd_group.o 00:06:50.174 CXX test/cpp_headers/fd.o 00:06:50.433 LINK led 00:06:50.433 CC test/env/pci/pci_ut.o 00:06:50.433 CC test/app/stub/stub.o 00:06:50.433 CXX test/cpp_headers/file.o 00:06:50.433 CC app/spdk_top/spdk_top.o 00:06:50.433 LINK overhead 00:06:50.691 CC app/vhost/vhost.o 00:06:50.691 CC app/spdk_dd/spdk_dd.o 00:06:50.691 LINK stub 00:06:50.691 CC app/fio/nvme/fio_plugin.o 00:06:50.691 CXX test/cpp_headers/fsdev.o 00:06:50.691 CC examples/idxd/perf/perf.o 00:06:50.691 LINK vhost 00:06:50.691 CC test/nvme/err_injection/err_injection.o 00:06:50.950 CXX test/cpp_headers/fsdev_module.o 00:06:50.950 LINK pci_ut 00:06:50.950 CC app/fio/bdev/fio_plugin.o 00:06:50.950 CXX test/cpp_headers/ftl.o 00:06:50.950 CXX test/cpp_headers/fuse_dispatcher.o 00:06:50.950 LINK err_injection 00:06:50.950 LINK spdk_dd 00:06:51.209 LINK idxd_perf 00:06:51.210 CXX test/cpp_headers/gpt_spec.o 00:06:51.210 CXX test/cpp_headers/hexlify.o 00:06:51.210 CXX test/cpp_headers/histogram_data.o 00:06:51.210 CXX test/cpp_headers/idxd.o 00:06:51.210 CC test/nvme/startup/startup.o 00:06:51.468 LINK spdk_nvme 00:06:51.468 CXX test/cpp_headers/idxd_spec.o 00:06:51.468 CXX test/cpp_headers/init.o 00:06:51.468 CC examples/fsdev/hello_world/hello_fsdev.o 00:06:51.468 CXX test/cpp_headers/ioat.o 00:06:51.468 CXX test/cpp_headers/ioat_spec.o 00:06:51.468 
LINK startup 00:06:51.468 CXX test/cpp_headers/iscsi_spec.o 00:06:51.726 LINK spdk_bdev 00:06:51.726 CC examples/accel/perf/accel_perf.o 00:06:51.726 LINK spdk_top 00:06:51.726 CXX test/cpp_headers/json.o 00:06:51.726 CC test/nvme/reserve/reserve.o 00:06:51.726 CXX test/cpp_headers/jsonrpc.o 00:06:51.726 CC test/nvme/simple_copy/simple_copy.o 00:06:51.726 LINK hello_fsdev 00:06:51.726 CXX test/cpp_headers/keyring.o 00:06:51.984 LINK reserve 00:06:51.984 CC examples/blob/hello_world/hello_blob.o 00:06:51.984 CC examples/nvme/hello_world/hello_world.o 00:06:51.984 CC examples/blob/cli/blobcli.o 00:06:51.984 CXX test/cpp_headers/keyring_module.o 00:06:51.984 CC examples/nvme/reconnect/reconnect.o 00:06:51.984 LINK simple_copy 00:06:52.243 CC examples/nvme/nvme_manage/nvme_manage.o 00:06:52.243 CC test/nvme/connect_stress/connect_stress.o 00:06:52.243 CXX test/cpp_headers/likely.o 00:06:52.243 LINK hello_blob 00:06:52.243 LINK hello_world 00:06:52.243 LINK accel_perf 00:06:52.243 CC test/nvme/boot_partition/boot_partition.o 00:06:52.501 CXX test/cpp_headers/log.o 00:06:52.502 LINK connect_stress 00:06:52.502 CXX test/cpp_headers/lvol.o 00:06:52.502 CXX test/cpp_headers/md5.o 00:06:52.502 LINK reconnect 00:06:52.502 CXX test/cpp_headers/memory.o 00:06:52.502 LINK boot_partition 00:06:52.502 LINK blobcli 00:06:52.502 CXX test/cpp_headers/mmio.o 00:06:52.760 CXX test/cpp_headers/nbd.o 00:06:52.760 CC examples/nvme/arbitration/arbitration.o 00:06:52.760 CC examples/nvme/hotplug/hotplug.o 00:06:52.760 CC examples/nvme/cmb_copy/cmb_copy.o 00:06:52.760 LINK nvme_manage 00:06:52.760 CXX test/cpp_headers/net.o 00:06:52.760 CC test/nvme/compliance/nvme_compliance.o 00:06:52.760 CC test/nvme/fused_ordering/fused_ordering.o 00:06:52.760 CXX test/cpp_headers/notify.o 00:06:53.018 CC examples/bdev/hello_world/hello_bdev.o 00:06:53.018 LINK cmb_copy 00:06:53.018 LINK hotplug 00:06:53.018 CC examples/nvme/abort/abort.o 00:06:53.018 CXX test/cpp_headers/nvme.o 00:06:53.018 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:06:53.018 LINK fused_ordering 00:06:53.276 LINK arbitration 00:06:53.276 LINK hello_bdev 00:06:53.276 LINK nvme_compliance 00:06:53.276 CC test/nvme/doorbell_aers/doorbell_aers.o 00:06:53.276 LINK pmr_persistence 00:06:53.276 CXX test/cpp_headers/nvme_intel.o 00:06:53.276 CC test/nvme/fdp/fdp.o 00:06:53.534 CC test/nvme/cuse/cuse.o 00:06:53.534 CXX test/cpp_headers/nvme_ocssd.o 00:06:53.534 LINK doorbell_aers 00:06:53.534 CXX test/cpp_headers/nvme_ocssd_spec.o 00:06:53.534 CXX test/cpp_headers/nvme_spec.o 00:06:53.534 CXX test/cpp_headers/nvme_zns.o 00:06:53.534 LINK abort 00:06:53.534 CC examples/bdev/bdevperf/bdevperf.o 00:06:53.792 CXX test/cpp_headers/nvmf_cmd.o 00:06:53.792 CXX test/cpp_headers/nvmf_fc_spec.o 00:06:53.792 CXX test/cpp_headers/nvmf.o 00:06:53.792 CXX test/cpp_headers/nvmf_spec.o 00:06:53.792 CXX test/cpp_headers/nvmf_transport.o 00:06:53.792 CXX test/cpp_headers/opal.o 00:06:53.792 LINK fdp 00:06:53.792 CXX test/cpp_headers/opal_spec.o 00:06:54.087 CXX test/cpp_headers/pci_ids.o 00:06:54.087 CXX test/cpp_headers/pipe.o 00:06:54.087 CXX test/cpp_headers/queue.o 00:06:54.087 CXX test/cpp_headers/reduce.o 00:06:54.087 CXX test/cpp_headers/rpc.o 00:06:54.087 CXX test/cpp_headers/scheduler.o 00:06:54.087 CXX test/cpp_headers/scsi.o 00:06:54.087 CXX test/cpp_headers/scsi_spec.o 00:06:54.087 CXX test/cpp_headers/sock.o 00:06:54.087 CXX test/cpp_headers/stdinc.o 00:06:54.346 CXX test/cpp_headers/string.o 00:06:54.346 CXX test/cpp_headers/thread.o 00:06:54.346 CXX 
test/cpp_headers/trace.o 00:06:54.346 CXX test/cpp_headers/trace_parser.o 00:06:54.346 CXX test/cpp_headers/tree.o 00:06:54.346 CXX test/cpp_headers/ublk.o 00:06:54.346 CXX test/cpp_headers/util.o 00:06:54.346 CXX test/cpp_headers/uuid.o 00:06:54.346 CXX test/cpp_headers/version.o 00:06:54.346 CXX test/cpp_headers/vfio_user_pci.o 00:06:54.604 CXX test/cpp_headers/vfio_user_spec.o 00:06:54.604 CXX test/cpp_headers/vhost.o 00:06:54.604 CXX test/cpp_headers/vmd.o 00:06:54.604 CXX test/cpp_headers/xor.o 00:06:54.604 CXX test/cpp_headers/zipf.o 00:06:54.863 LINK bdevperf 00:06:55.429 CC examples/nvmf/nvmf/nvmf.o 00:06:55.429 LINK cuse 00:06:55.687 LINK esnap 00:06:55.687 LINK nvmf 00:06:56.253 00:06:56.253 real 1m42.560s 00:06:56.253 user 9m44.289s 00:06:56.253 sys 1m48.845s 00:06:56.253 07:43:18 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:06:56.253 07:43:18 make -- common/autotest_common.sh@10 -- $ set +x 00:06:56.253 ************************************ 00:06:56.253 END TEST make 00:06:56.253 ************************************ 00:06:56.253 07:43:18 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:06:56.253 07:43:18 -- pm/common@29 -- $ signal_monitor_resources TERM 00:06:56.253 07:43:18 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:06:56.253 07:43:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:56.253 07:43:18 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:06:56.253 07:43:18 -- pm/common@44 -- $ pid=5435 00:06:56.253 07:43:18 -- pm/common@50 -- $ kill -TERM 5435 00:06:56.253 07:43:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:56.253 07:43:18 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:06:56.253 07:43:18 -- pm/common@44 -- $ pid=5436 00:06:56.253 07:43:18 -- pm/common@50 -- $ kill -TERM 5436 00:06:56.511 07:43:18 -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:06:56.511 07:43:18 -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:06:56.511 07:43:18 -- common/autotest_common.sh@1689 -- # lcov --version 00:06:56.511 07:43:19 -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:06:56.511 07:43:19 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:56.511 07:43:19 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:56.511 07:43:19 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:56.511 07:43:19 -- scripts/common.sh@336 -- # IFS=.-: 00:06:56.511 07:43:19 -- scripts/common.sh@336 -- # read -ra ver1 00:06:56.511 07:43:19 -- scripts/common.sh@337 -- # IFS=.-: 00:06:56.511 07:43:19 -- scripts/common.sh@337 -- # read -ra ver2 00:06:56.511 07:43:19 -- scripts/common.sh@338 -- # local 'op=<' 00:06:56.511 07:43:19 -- scripts/common.sh@340 -- # ver1_l=2 00:06:56.511 07:43:19 -- scripts/common.sh@341 -- # ver2_l=1 00:06:56.511 07:43:19 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:56.511 07:43:19 -- scripts/common.sh@344 -- # case "$op" in 00:06:56.511 07:43:19 -- scripts/common.sh@345 -- # : 1 00:06:56.511 07:43:19 -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:56.511 07:43:19 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:56.511 07:43:19 -- scripts/common.sh@365 -- # decimal 1 00:06:56.511 07:43:19 -- scripts/common.sh@353 -- # local d=1 00:06:56.511 07:43:19 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:56.511 07:43:19 -- scripts/common.sh@355 -- # echo 1 00:06:56.511 07:43:19 -- scripts/common.sh@365 -- # ver1[v]=1 00:06:56.511 07:43:19 -- scripts/common.sh@366 -- # decimal 2 00:06:56.511 07:43:19 -- scripts/common.sh@353 -- # local d=2 00:06:56.511 07:43:19 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:56.511 07:43:19 -- scripts/common.sh@355 -- # echo 2 00:06:56.511 07:43:19 -- scripts/common.sh@366 -- # ver2[v]=2 00:06:56.511 07:43:19 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:56.511 07:43:19 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:56.511 07:43:19 -- scripts/common.sh@368 -- # return 0 00:06:56.511 07:43:19 -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:56.511 07:43:19 -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:06:56.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.511 --rc genhtml_branch_coverage=1 00:06:56.511 --rc genhtml_function_coverage=1 00:06:56.511 --rc genhtml_legend=1 00:06:56.511 --rc geninfo_all_blocks=1 00:06:56.511 --rc geninfo_unexecuted_blocks=1 00:06:56.511 00:06:56.511 ' 00:06:56.511 07:43:19 -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:06:56.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.511 --rc genhtml_branch_coverage=1 00:06:56.511 --rc genhtml_function_coverage=1 00:06:56.511 --rc genhtml_legend=1 00:06:56.511 --rc geninfo_all_blocks=1 00:06:56.511 --rc geninfo_unexecuted_blocks=1 00:06:56.511 00:06:56.511 ' 00:06:56.511 07:43:19 -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:06:56.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.511 --rc genhtml_branch_coverage=1 00:06:56.511 --rc genhtml_function_coverage=1 00:06:56.511 --rc genhtml_legend=1 00:06:56.511 --rc geninfo_all_blocks=1 00:06:56.511 --rc geninfo_unexecuted_blocks=1 00:06:56.511 00:06:56.511 ' 00:06:56.511 07:43:19 -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:06:56.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.511 --rc genhtml_branch_coverage=1 00:06:56.511 --rc genhtml_function_coverage=1 00:06:56.511 --rc genhtml_legend=1 00:06:56.511 --rc geninfo_all_blocks=1 00:06:56.511 --rc geninfo_unexecuted_blocks=1 00:06:56.511 00:06:56.511 ' 00:06:56.511 07:43:19 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:56.511 07:43:19 -- nvmf/common.sh@7 -- # uname -s 00:06:56.511 07:43:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:56.511 07:43:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:56.511 07:43:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:56.511 07:43:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:56.511 07:43:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:56.511 07:43:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:56.511 07:43:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:56.511 07:43:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:56.511 07:43:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:56.511 07:43:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:56.511 07:43:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8ac525bc-2596-4ce9-9d20-0a718625d8cf 00:06:56.511 
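Note: the scripts/common.sh trace above is the version gate that decides which lcov flags autotest may use. 'lt 1.15 2' splits both version strings on '.', '-' and ':' (IFS=.-:) and compares them field by field, treating missing fields as zero. A condensed restatement of that traced logic:

#!/usr/bin/env bash
# Sketch of the component-wise comparison traced above:
# succeeds (returns 0) when $1 is strictly older than $2.
version_lt() {
    local -a a b
    local i n
    IFS=.-: read -ra a <<< "$1"
    IFS=.-: read -ra b <<< "$2"
    n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    done
    return 1   # versions are equal
}
version_lt 1.15 2 && echo "1.15 < 2"

Here lcov 1.15 compares older than 2 in the first field, so the trace above ends by selecting the pre-2.0 option set (--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1).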
07:43:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=8ac525bc-2596-4ce9-9d20-0a718625d8cf 00:06:56.511 07:43:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:56.511 07:43:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:56.511 07:43:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:56.511 07:43:19 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:56.512 07:43:19 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:56.512 07:43:19 -- scripts/common.sh@15 -- # shopt -s extglob 00:06:56.512 07:43:19 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:56.512 07:43:19 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:56.512 07:43:19 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:56.512 07:43:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.512 07:43:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.512 07:43:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.512 07:43:19 -- paths/export.sh@5 -- # export PATH 00:06:56.512 07:43:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.512 07:43:19 -- nvmf/common.sh@51 -- # : 0 00:06:56.512 07:43:19 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:56.512 07:43:19 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:56.512 07:43:19 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:56.512 07:43:19 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:56.512 07:43:19 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:56.512 07:43:19 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:56.512 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:56.512 07:43:19 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:56.512 07:43:19 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:56.512 07:43:19 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:56.512 07:43:19 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:06:56.512 07:43:19 -- spdk/autotest.sh@32 -- # uname -s 00:06:56.512 07:43:19 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:06:56.512 07:43:19 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:06:56.512 07:43:19 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:56.512 07:43:19 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:06:56.512 07:43:19 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:56.512 07:43:19 -- spdk/autotest.sh@44 -- # modprobe nbd 00:06:56.512 07:43:19 -- spdk/autotest.sh@46 -- # type -P udevadm 00:06:56.512 07:43:19 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:06:56.512 07:43:19 -- spdk/autotest.sh@48 -- # udevadm_pid=55067 00:06:56.512 07:43:19 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:06:56.512 07:43:19 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:06:56.512 07:43:19 -- pm/common@17 -- # local monitor 00:06:56.512 07:43:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:56.512 07:43:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:56.512 07:43:19 -- pm/common@25 -- # sleep 1 00:06:56.512 07:43:19 -- pm/common@21 -- # date +%s 00:06:56.512 07:43:19 -- pm/common@21 -- # date +%s 00:06:56.512 07:43:19 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730878999 00:06:56.770 07:43:19 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730878999 00:06:56.770 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730878999_collect-cpu-load.pm.log 00:06:56.770 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730878999_collect-vmstat.pm.log 00:06:57.704 07:43:20 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:57.704 07:43:20 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:06:57.704 07:43:20 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:57.704 07:43:20 -- common/autotest_common.sh@10 -- # set +x 00:06:57.704 07:43:20 -- spdk/autotest.sh@59 -- # create_test_list 00:06:57.704 07:43:20 -- common/autotest_common.sh@748 -- # xtrace_disable 00:06:57.704 07:43:20 -- common/autotest_common.sh@10 -- # set +x 00:06:57.704 07:43:20 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:06:57.704 07:43:20 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:06:57.704 07:43:20 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:06:57.704 07:43:20 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:06:57.704 07:43:20 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:06:57.704 07:43:20 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:06:57.704 07:43:20 -- common/autotest_common.sh@1453 -- # uname 00:06:57.704 07:43:20 -- common/autotest_common.sh@1453 -- # '[' Linux = FreeBSD ']' 00:06:57.704 07:43:20 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:06:57.704 07:43:20 -- common/autotest_common.sh@1473 -- # uname 00:06:57.704 07:43:20 -- common/autotest_common.sh@1473 -- # [[ Linux = FreeBSD ]] 00:06:57.704 07:43:20 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:06:57.704 07:43:20 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:06:57.704 lcov: LCOV version 1.15 00:06:57.704 07:43:20 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:07:15.809 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:07:15.809 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:07:33.888 07:43:53 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:07:33.888 07:43:53 -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:33.888 07:43:53 -- common/autotest_common.sh@10 -- # set +x 00:07:33.888 07:43:53 -- spdk/autotest.sh@78 -- # rm -f 00:07:33.888 07:43:53 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:33.888 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:33.888 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:07:33.888 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:07:33.888 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:07:33.888 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:07:33.888 07:43:54 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:07:33.888 07:43:54 -- common/autotest_common.sh@1653 -- # zoned_devs=() 00:07:33.888 07:43:54 -- common/autotest_common.sh@1653 -- # local -gA zoned_devs 00:07:33.888 07:43:54 -- common/autotest_common.sh@1654 -- # local nvme bdf 00:07:33.888 07:43:54 -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:07:33.888 07:43:54 -- common/autotest_common.sh@1657 -- # is_block_zoned nvme0n1 00:07:33.888 07:43:54 -- common/autotest_common.sh@1646 -- # local device=nvme0n1 00:07:33.888 07:43:54 -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:33.888 07:43:54 -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:07:33.888 07:43:54 -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:07:33.888 07:43:54 -- common/autotest_common.sh@1657 -- # is_block_zoned nvme1n1 00:07:33.888 07:43:54 -- common/autotest_common.sh@1646 -- # local device=nvme1n1 00:07:33.888 07:43:54 -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:33.888 07:43:54 -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:07:33.888 07:43:54 -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:07:33.888 07:43:54 -- common/autotest_common.sh@1657 -- # is_block_zoned nvme2c2n1 00:07:33.888 07:43:54 -- common/autotest_common.sh@1646 -- # local device=nvme2c2n1 00:07:33.888 07:43:54 -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:07:33.888 07:43:54 -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:07:33.888 07:43:54 -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:07:33.888 07:43:54 -- common/autotest_common.sh@1657 -- # is_block_zoned nvme2n1 00:07:33.888 07:43:54 -- common/autotest_common.sh@1646 -- # local device=nvme2n1 00:07:33.888 07:43:54 -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:33.888 07:43:54 -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:07:33.888 07:43:54 -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:07:33.888 07:43:54 -- common/autotest_common.sh@1657 -- # is_block_zoned nvme3n1 00:07:33.888 07:43:54 -- common/autotest_common.sh@1646 -- # local device=nvme3n1 00:07:33.888 07:43:54 -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:33.888 
07:43:54 -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:07:33.888 07:43:54 -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:07:33.888 07:43:54 -- common/autotest_common.sh@1657 -- # is_block_zoned nvme3n2 00:07:33.888 07:43:54 -- common/autotest_common.sh@1646 -- # local device=nvme3n2 00:07:33.888 07:43:54 -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme3n2/queue/zoned ]] 00:07:33.888 07:43:54 -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:07:33.888 07:43:54 -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:07:33.888 07:43:54 -- common/autotest_common.sh@1657 -- # is_block_zoned nvme3n3 00:07:33.888 07:43:54 -- common/autotest_common.sh@1646 -- # local device=nvme3n3 00:07:33.888 07:43:54 -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme3n3/queue/zoned ]] 00:07:33.888 07:43:54 -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:07:33.888 07:43:54 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:07:33.889 07:43:54 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:33.889 07:43:54 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:33.889 07:43:54 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:07:33.889 07:43:54 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:07:33.889 07:43:54 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:07:33.889 No valid GPT data, bailing 00:07:33.889 07:43:55 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:07:33.889 07:43:55 -- scripts/common.sh@394 -- # pt= 00:07:33.889 07:43:55 -- scripts/common.sh@395 -- # return 1 00:07:33.889 07:43:55 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:07:33.889 1+0 records in 00:07:33.889 1+0 records out 00:07:33.889 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129774 s, 80.8 MB/s 00:07:33.889 07:43:55 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:33.889 07:43:55 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:33.889 07:43:55 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:07:33.889 07:43:55 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:07:33.889 07:43:55 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:07:33.889 No valid GPT data, bailing 00:07:33.889 07:43:55 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:07:33.889 07:43:55 -- scripts/common.sh@394 -- # pt= 00:07:33.889 07:43:55 -- scripts/common.sh@395 -- # return 1 00:07:33.889 07:43:55 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:07:33.889 1+0 records in 00:07:33.889 1+0 records out 00:07:33.889 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00475591 s, 220 MB/s 00:07:33.889 07:43:55 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:33.889 07:43:55 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:33.889 07:43:55 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:07:33.889 07:43:55 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:07:33.889 07:43:55 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:07:33.889 No valid GPT data, bailing 00:07:33.889 07:43:55 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:07:33.889 07:43:55 -- scripts/common.sh@394 -- # pt= 00:07:33.889 07:43:55 -- scripts/common.sh@395 -- # return 1 00:07:33.889 07:43:55 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:07:33.889 1+0 
records in 00:07:33.889 1+0 records out 00:07:33.889 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00493909 s, 212 MB/s 00:07:33.889 07:43:55 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:33.889 07:43:55 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:33.889 07:43:55 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:07:33.889 07:43:55 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:07:33.889 07:43:55 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:07:33.889 No valid GPT data, bailing 00:07:33.889 07:43:55 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:07:33.889 07:43:55 -- scripts/common.sh@394 -- # pt= 00:07:33.889 07:43:55 -- scripts/common.sh@395 -- # return 1 00:07:33.889 07:43:55 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:07:33.889 1+0 records in 00:07:33.889 1+0 records out 00:07:33.889 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00373014 s, 281 MB/s 00:07:33.889 07:43:55 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:33.889 07:43:55 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:33.889 07:43:55 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n2 00:07:33.889 07:43:55 -- scripts/common.sh@381 -- # local block=/dev/nvme3n2 pt 00:07:33.889 07:43:55 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n2 00:07:33.889 No valid GPT data, bailing 00:07:33.889 07:43:55 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n2 00:07:33.889 07:43:55 -- scripts/common.sh@394 -- # pt= 00:07:33.889 07:43:55 -- scripts/common.sh@395 -- # return 1 00:07:33.889 07:43:55 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n2 bs=1M count=1 00:07:33.889 1+0 records in 00:07:33.889 1+0 records out 00:07:33.889 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00404153 s, 259 MB/s 00:07:33.889 07:43:55 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:33.889 07:43:55 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:33.889 07:43:55 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n3 00:07:33.889 07:43:55 -- scripts/common.sh@381 -- # local block=/dev/nvme3n3 pt 00:07:33.889 07:43:55 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n3 00:07:33.889 No valid GPT data, bailing 00:07:33.889 07:43:55 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n3 00:07:33.889 07:43:55 -- scripts/common.sh@394 -- # pt= 00:07:33.889 07:43:55 -- scripts/common.sh@395 -- # return 1 00:07:33.889 07:43:55 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n3 bs=1M count=1 00:07:33.889 1+0 records in 00:07:33.889 1+0 records out 00:07:33.889 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00462612 s, 227 MB/s 00:07:33.889 07:43:55 -- spdk/autotest.sh@105 -- # sync 00:07:33.889 07:43:55 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:07:33.889 07:43:55 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:07:33.889 07:43:55 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:07:35.275 07:43:57 -- spdk/autotest.sh@111 -- # uname -s 00:07:35.275 07:43:57 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:07:35.275 07:43:57 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:07:35.275 07:43:57 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:07:35.560 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:36.128 
Hugepages 00:07:36.128 node hugesize free / total 00:07:36.128 node0 1048576kB 0 / 0 00:07:36.128 node0 2048kB 0 / 0 00:07:36.128 00:07:36.128 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:36.128 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:07:36.128 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:07:36.386 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:07:36.386 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme3 nvme3n1 nvme3n2 nvme3n3 00:07:36.386 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:07:36.386 07:43:58 -- spdk/autotest.sh@117 -- # uname -s 00:07:36.386 07:43:58 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:07:36.386 07:43:58 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:07:36.386 07:43:58 -- common/autotest_common.sh@1512 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:36.954 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:37.532 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:37.532 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:37.532 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:37.790 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:37.790 07:44:00 -- common/autotest_common.sh@1513 -- # sleep 1 00:07:38.725 07:44:01 -- common/autotest_common.sh@1514 -- # bdfs=() 00:07:38.725 07:44:01 -- common/autotest_common.sh@1514 -- # local bdfs 00:07:38.725 07:44:01 -- common/autotest_common.sh@1516 -- # bdfs=($(get_nvme_bdfs)) 00:07:38.725 07:44:01 -- common/autotest_common.sh@1516 -- # get_nvme_bdfs 00:07:38.725 07:44:01 -- common/autotest_common.sh@1494 -- # bdfs=() 00:07:38.725 07:44:01 -- common/autotest_common.sh@1494 -- # local bdfs 00:07:38.725 07:44:01 -- common/autotest_common.sh@1495 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:38.725 07:44:01 -- common/autotest_common.sh@1495 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:38.725 07:44:01 -- common/autotest_common.sh@1495 -- # jq -r '.config[].params.traddr' 00:07:38.725 07:44:01 -- common/autotest_common.sh@1496 -- # (( 4 == 0 )) 00:07:38.725 07:44:01 -- common/autotest_common.sh@1500 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:38.725 07:44:01 -- common/autotest_common.sh@1518 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:39.291 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:39.291 Waiting for block devices as requested 00:07:39.291 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:39.549 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:39.549 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:39.549 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:44.819 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:44.819 07:44:07 -- common/autotest_common.sh@1520 -- # for bdf in "${bdfs[@]}" 00:07:44.819 07:44:07 -- common/autotest_common.sh@1521 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:07:44.819 07:44:07 -- common/autotest_common.sh@1483 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:07:44.819 07:44:07 -- common/autotest_common.sh@1483 -- # grep 0000:00:10.0/nvme/nvme 00:07:44.819 07:44:07 -- common/autotest_common.sh@1483 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:44.819 07:44:07 -- common/autotest_common.sh@1484 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:07:44.819 07:44:07 -- common/autotest_common.sh@1488 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:44.819 07:44:07 -- common/autotest_common.sh@1488 -- # printf '%s\n' nvme1 00:07:44.819 07:44:07 -- common/autotest_common.sh@1521 -- # nvme_ctrlr=/dev/nvme1 00:07:44.819 07:44:07 -- common/autotest_common.sh@1522 -- # [[ -z /dev/nvme1 ]] 00:07:44.819 07:44:07 -- common/autotest_common.sh@1527 -- # nvme id-ctrl /dev/nvme1 00:07:44.819 07:44:07 -- common/autotest_common.sh@1527 -- # grep oacs 00:07:44.819 07:44:07 -- common/autotest_common.sh@1527 -- # cut -d: -f2 00:07:44.819 07:44:07 -- common/autotest_common.sh@1527 -- # oacs=' 0x12a' 00:07:44.819 07:44:07 -- common/autotest_common.sh@1528 -- # oacs_ns_manage=8 00:07:44.819 07:44:07 -- common/autotest_common.sh@1530 -- # [[ 8 -ne 0 ]] 00:07:44.819 07:44:07 -- common/autotest_common.sh@1536 -- # nvme id-ctrl /dev/nvme1 00:07:44.819 07:44:07 -- common/autotest_common.sh@1536 -- # cut -d: -f2 00:07:44.819 07:44:07 -- common/autotest_common.sh@1536 -- # grep unvmcap 00:07:44.819 07:44:07 -- common/autotest_common.sh@1536 -- # unvmcap=' 0' 00:07:44.819 07:44:07 -- common/autotest_common.sh@1537 -- # [[ 0 -eq 0 ]] 00:07:44.819 07:44:07 -- common/autotest_common.sh@1539 -- # continue 00:07:44.819 07:44:07 -- common/autotest_common.sh@1520 -- # for bdf in "${bdfs[@]}" 00:07:44.819 07:44:07 -- common/autotest_common.sh@1521 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:07:44.819 07:44:07 -- common/autotest_common.sh@1483 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:07:44.819 07:44:07 -- common/autotest_common.sh@1483 -- # grep 0000:00:11.0/nvme/nvme 00:07:44.819 07:44:07 -- common/autotest_common.sh@1483 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:44.819 07:44:07 -- common/autotest_common.sh@1484 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:07:44.819 07:44:07 -- common/autotest_common.sh@1488 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:44.819 07:44:07 -- common/autotest_common.sh@1488 -- # printf '%s\n' nvme0 00:07:44.819 07:44:07 -- common/autotest_common.sh@1521 -- # nvme_ctrlr=/dev/nvme0 00:07:44.819 07:44:07 -- common/autotest_common.sh@1522 -- # [[ -z /dev/nvme0 ]] 00:07:44.819 07:44:07 -- common/autotest_common.sh@1527 -- # grep oacs 00:07:44.819 07:44:07 -- common/autotest_common.sh@1527 -- # nvme id-ctrl /dev/nvme0 00:07:44.819 07:44:07 -- common/autotest_common.sh@1527 -- # cut -d: -f2 00:07:44.819 07:44:07 -- common/autotest_common.sh@1527 -- # oacs=' 0x12a' 00:07:44.819 07:44:07 -- common/autotest_common.sh@1528 -- # oacs_ns_manage=8 00:07:44.819 07:44:07 -- common/autotest_common.sh@1530 -- # [[ 8 -ne 0 ]] 00:07:44.819 07:44:07 -- common/autotest_common.sh@1536 -- # nvme id-ctrl /dev/nvme0 00:07:44.819 07:44:07 -- common/autotest_common.sh@1536 -- # grep unvmcap 00:07:44.819 07:44:07 -- common/autotest_common.sh@1536 -- # cut -d: -f2 00:07:44.819 07:44:07 -- common/autotest_common.sh@1536 -- # unvmcap=' 0' 00:07:44.819 07:44:07 -- common/autotest_common.sh@1537 -- # [[ 0 -eq 0 ]] 00:07:44.819 07:44:07 -- common/autotest_common.sh@1539 -- # continue 00:07:44.819 07:44:07 -- common/autotest_common.sh@1520 -- # for bdf in "${bdfs[@]}" 00:07:44.819 07:44:07 -- common/autotest_common.sh@1521 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:07:44.819 07:44:07 -- common/autotest_common.sh@1483 -- # grep 0000:00:12.0/nvme/nvme 
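Note: the get_nvme_ctrlr_from_bdf calls traced here resolve a PCI address to its kernel controller name by canonicalizing the /sys/class/nvme/nvme* symlinks and grepping for the owning BDF, which is why 0000:00:10.0 maps to nvme1 while 0000:00:11.0 maps to nvme0. A standalone sketch of that lookup, with error handling trimmed:

#!/usr/bin/env bash
# Sketch: map a PCI BDF (e.g. 0000:00:10.0) to its NVMe controller name.
bdf_to_nvme_ctrlr() {
    local bdf=$1 link
    for link in /sys/class/nvme/nvme*; do
        # The canonical path embeds the owning PCI device, e.g.
        # /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
        if [[ $(readlink -f "$link") == *"/$bdf/"* ]]; then
            basename "$link"
            return 0
        fi
    done
    return 1
}
# Usage: ctrlr=$(bdf_to_nvme_ctrlr 0000:00:10.0) && echo "/dev/$ctrlr"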
00:07:44.819 07:44:07 -- common/autotest_common.sh@1483 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:07:44.819 07:44:07 -- common/autotest_common.sh@1483 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:07:44.819 07:44:07 -- common/autotest_common.sh@1484 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:07:44.819 07:44:07 -- common/autotest_common.sh@1488 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:07:44.819 07:44:07 -- common/autotest_common.sh@1488 -- # printf '%s\n' nvme2 00:07:44.819 07:44:07 -- common/autotest_common.sh@1521 -- # nvme_ctrlr=/dev/nvme2 00:07:44.819 07:44:07 -- common/autotest_common.sh@1522 -- # [[ -z /dev/nvme2 ]] 00:07:44.819 07:44:07 -- common/autotest_common.sh@1527 -- # nvme id-ctrl /dev/nvme2 00:07:44.819 07:44:07 -- common/autotest_common.sh@1527 -- # grep oacs 00:07:44.819 07:44:07 -- common/autotest_common.sh@1527 -- # cut -d: -f2 00:07:44.819 07:44:07 -- common/autotest_common.sh@1527 -- # oacs=' 0x12a' 00:07:44.819 07:44:07 -- common/autotest_common.sh@1528 -- # oacs_ns_manage=8 00:07:44.819 07:44:07 -- common/autotest_common.sh@1530 -- # [[ 8 -ne 0 ]] 00:07:44.819 07:44:07 -- common/autotest_common.sh@1536 -- # nvme id-ctrl /dev/nvme2 00:07:44.819 07:44:07 -- common/autotest_common.sh@1536 -- # grep unvmcap 00:07:44.819 07:44:07 -- common/autotest_common.sh@1536 -- # cut -d: -f2 00:07:44.819 07:44:07 -- common/autotest_common.sh@1536 -- # unvmcap=' 0' 00:07:44.819 07:44:07 -- common/autotest_common.sh@1537 -- # [[ 0 -eq 0 ]] 00:07:44.819 07:44:07 -- common/autotest_common.sh@1539 -- # continue 00:07:44.819 07:44:07 -- common/autotest_common.sh@1520 -- # for bdf in "${bdfs[@]}" 00:07:44.819 07:44:07 -- common/autotest_common.sh@1521 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:07:44.819 07:44:07 -- common/autotest_common.sh@1483 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:07:44.819 07:44:07 -- common/autotest_common.sh@1483 -- # grep 0000:00:13.0/nvme/nvme 00:07:44.819 07:44:07 -- common/autotest_common.sh@1483 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:07:44.819 07:44:07 -- common/autotest_common.sh@1484 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:07:44.819 07:44:07 -- common/autotest_common.sh@1488 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:07:44.819 07:44:07 -- common/autotest_common.sh@1488 -- # printf '%s\n' nvme3 00:07:44.819 07:44:07 -- common/autotest_common.sh@1521 -- # nvme_ctrlr=/dev/nvme3 00:07:44.819 07:44:07 -- common/autotest_common.sh@1522 -- # [[ -z /dev/nvme3 ]] 00:07:44.819 07:44:07 -- common/autotest_common.sh@1527 -- # nvme id-ctrl /dev/nvme3 00:07:44.819 07:44:07 -- common/autotest_common.sh@1527 -- # grep oacs 00:07:44.819 07:44:07 -- common/autotest_common.sh@1527 -- # cut -d: -f2 00:07:44.819 07:44:07 -- common/autotest_common.sh@1527 -- # oacs=' 0x12a' 00:07:44.819 07:44:07 -- common/autotest_common.sh@1528 -- # oacs_ns_manage=8 00:07:44.819 07:44:07 -- common/autotest_common.sh@1530 -- # [[ 8 -ne 0 ]] 00:07:44.819 07:44:07 -- common/autotest_common.sh@1536 -- # grep unvmcap 00:07:44.819 07:44:07 -- common/autotest_common.sh@1536 -- # nvme id-ctrl /dev/nvme3 00:07:44.819 07:44:07 -- common/autotest_common.sh@1536 -- # cut -d: -f2 00:07:44.819 07:44:07 -- common/autotest_common.sh@1536 -- # unvmcap=' 0' 00:07:44.819 07:44:07 -- common/autotest_common.sh@1537 -- # [[ 0 -eq 0 ]] 
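Note: the id-ctrl pipeline traced above reads the controller's OACS (Optional Admin Command Support) word and its unallocated capacity. oacs=0x12a is binary 1 0010 1010; masking bit 3 (0x8) yields 8, so namespace management is reported as supported on every controller here, and unvmcap=0 means there is no unallocated capacity left to revert. A sketch of the same check with nvme-cli (the device path is an example):

#!/usr/bin/env bash
# Sketch: does an NVMe controller advertise namespace management (OACS bit 3)?
ctrlr=${1:-/dev/nvme0}
oacs=$(nvme id-ctrl "$ctrlr" | awk -F: '/^oacs/ {print $2}')
if (( oacs & 0x8 )); then
    echo "$ctrlr: namespace management supported (oacs=$oacs)"
else
    echo "$ctrlr: namespace management not supported (oacs=$oacs)"
fi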
00:07:44.819 07:44:07 -- common/autotest_common.sh@1539 -- # continue 00:07:44.819 07:44:07 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:07:44.819 07:44:07 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:44.819 07:44:07 -- common/autotest_common.sh@10 -- # set +x 00:07:44.819 07:44:07 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:07:44.819 07:44:07 -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:44.819 07:44:07 -- common/autotest_common.sh@10 -- # set +x 00:07:44.819 07:44:07 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:45.388 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:45.955 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:45.955 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:45.955 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:45.955 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:46.213 07:44:08 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:07:46.213 07:44:08 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:46.213 07:44:08 -- common/autotest_common.sh@10 -- # set +x 00:07:46.213 07:44:08 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:07:46.213 07:44:08 -- common/autotest_common.sh@1574 -- # mapfile -t bdfs 00:07:46.213 07:44:08 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs_by_id 0x0a54 00:07:46.213 07:44:08 -- common/autotest_common.sh@1559 -- # bdfs=() 00:07:46.213 07:44:08 -- common/autotest_common.sh@1559 -- # _bdfs=() 00:07:46.213 07:44:08 -- common/autotest_common.sh@1559 -- # local bdfs _bdfs 00:07:46.213 07:44:08 -- common/autotest_common.sh@1560 -- # _bdfs=($(get_nvme_bdfs)) 00:07:46.213 07:44:08 -- common/autotest_common.sh@1560 -- # get_nvme_bdfs 00:07:46.213 07:44:08 -- common/autotest_common.sh@1494 -- # bdfs=() 00:07:46.213 07:44:08 -- common/autotest_common.sh@1494 -- # local bdfs 00:07:46.213 07:44:08 -- common/autotest_common.sh@1495 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:46.213 07:44:08 -- common/autotest_common.sh@1495 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:46.213 07:44:08 -- common/autotest_common.sh@1495 -- # jq -r '.config[].params.traddr' 00:07:46.213 07:44:08 -- common/autotest_common.sh@1496 -- # (( 4 == 0 )) 00:07:46.213 07:44:08 -- common/autotest_common.sh@1500 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:46.213 07:44:08 -- common/autotest_common.sh@1561 -- # for bdf in "${_bdfs[@]}" 00:07:46.213 07:44:08 -- common/autotest_common.sh@1562 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:07:46.213 07:44:08 -- common/autotest_common.sh@1562 -- # device=0x0010 00:07:46.213 07:44:08 -- common/autotest_common.sh@1563 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:46.213 07:44:08 -- common/autotest_common.sh@1561 -- # for bdf in "${_bdfs[@]}" 00:07:46.214 07:44:08 -- common/autotest_common.sh@1562 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:07:46.214 07:44:08 -- common/autotest_common.sh@1562 -- # device=0x0010 00:07:46.214 07:44:08 -- common/autotest_common.sh@1563 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:46.214 07:44:08 -- common/autotest_common.sh@1561 -- # for bdf in "${_bdfs[@]}" 00:07:46.214 07:44:08 -- common/autotest_common.sh@1562 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:07:46.214 07:44:08 -- common/autotest_common.sh@1562 -- # device=0x0010 00:07:46.214 07:44:08 -- common/autotest_common.sh@1563 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
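Note: the loop that follows is opal_revert_cleanup deciding whether any controller needs an Opal revert: it reads each device's PCI device ID from sysfs and compares it against 0x0a54 (the device ID the test treats as revert-capable); the escaped \0\x\0\a\5\4 is simply that literal string inside a [[ == ]] pattern match. The QEMU controllers all report 0x0010, so nothing matches and the cleanup returns early. A condensed sketch of the filter (the real helper only walks the NVMe BDFs from gen_nvme.sh; scanning all of /sys/bus/pci here is a simplification):

#!/usr/bin/env bash
# Sketch: collect PCI addresses whose device ID matches a target (0x0a54 here).
target=${1:-0x0a54}
matches=()
for dev in /sys/bus/pci/devices/*; do
    [[ -e $dev/device ]] || continue
    [[ $(<"$dev/device") == "$target" ]] && matches+=("$(basename "$dev")")
done
printf 'matched: %s\n' "${matches[@]:-none}"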
00:07:46.214 07:44:08 -- common/autotest_common.sh@1561 -- # for bdf in "${_bdfs[@]}" 00:07:46.214 07:44:08 -- common/autotest_common.sh@1562 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:07:46.214 07:44:08 -- common/autotest_common.sh@1562 -- # device=0x0010 00:07:46.214 07:44:08 -- common/autotest_common.sh@1563 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:46.214 07:44:08 -- common/autotest_common.sh@1568 -- # (( 0 > 0 )) 00:07:46.214 07:44:08 -- common/autotest_common.sh@1568 -- # return 0 00:07:46.214 07:44:08 -- common/autotest_common.sh@1575 -- # [[ -z '' ]] 00:07:46.214 07:44:08 -- common/autotest_common.sh@1576 -- # return 0 00:07:46.214 07:44:08 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:07:46.214 07:44:08 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:07:46.214 07:44:08 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:46.214 07:44:08 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:46.214 07:44:08 -- spdk/autotest.sh@149 -- # timing_enter lib 00:07:46.214 07:44:08 -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:46.214 07:44:08 -- common/autotest_common.sh@10 -- # set +x 00:07:46.214 07:44:08 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:07:46.214 07:44:08 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:46.214 07:44:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:46.214 07:44:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:46.214 07:44:08 -- common/autotest_common.sh@10 -- # set +x 00:07:46.214 ************************************ 00:07:46.214 START TEST env 00:07:46.214 ************************************ 00:07:46.214 07:44:08 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:46.214 * Looking for test storage... 00:07:46.472 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:07:46.472 07:44:08 env -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:07:46.472 07:44:08 env -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:07:46.472 07:44:08 env -- common/autotest_common.sh@1689 -- # lcov --version 00:07:46.472 07:44:08 env -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:07:46.472 07:44:08 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:46.472 07:44:08 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:46.472 07:44:08 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:46.472 07:44:08 env -- scripts/common.sh@336 -- # IFS=.-: 00:07:46.472 07:44:08 env -- scripts/common.sh@336 -- # read -ra ver1 00:07:46.472 07:44:08 env -- scripts/common.sh@337 -- # IFS=.-: 00:07:46.472 07:44:08 env -- scripts/common.sh@337 -- # read -ra ver2 00:07:46.472 07:44:08 env -- scripts/common.sh@338 -- # local 'op=<' 00:07:46.472 07:44:08 env -- scripts/common.sh@340 -- # ver1_l=2 00:07:46.472 07:44:08 env -- scripts/common.sh@341 -- # ver2_l=1 00:07:46.472 07:44:08 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:46.473 07:44:08 env -- scripts/common.sh@344 -- # case "$op" in 00:07:46.473 07:44:08 env -- scripts/common.sh@345 -- # : 1 00:07:46.473 07:44:08 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:46.473 07:44:08 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:46.473 07:44:08 env -- scripts/common.sh@365 -- # decimal 1 00:07:46.473 07:44:08 env -- scripts/common.sh@353 -- # local d=1 00:07:46.473 07:44:08 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:46.473 07:44:08 env -- scripts/common.sh@355 -- # echo 1 00:07:46.473 07:44:08 env -- scripts/common.sh@365 -- # ver1[v]=1 00:07:46.473 07:44:08 env -- scripts/common.sh@366 -- # decimal 2 00:07:46.473 07:44:08 env -- scripts/common.sh@353 -- # local d=2 00:07:46.473 07:44:08 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:46.473 07:44:08 env -- scripts/common.sh@355 -- # echo 2 00:07:46.473 07:44:08 env -- scripts/common.sh@366 -- # ver2[v]=2 00:07:46.473 07:44:08 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:46.473 07:44:08 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:46.473 07:44:08 env -- scripts/common.sh@368 -- # return 0 00:07:46.473 07:44:08 env -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:46.473 07:44:08 env -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:07:46.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.473 --rc genhtml_branch_coverage=1 00:07:46.473 --rc genhtml_function_coverage=1 00:07:46.473 --rc genhtml_legend=1 00:07:46.473 --rc geninfo_all_blocks=1 00:07:46.473 --rc geninfo_unexecuted_blocks=1 00:07:46.473 00:07:46.473 ' 00:07:46.473 07:44:08 env -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:07:46.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.473 --rc genhtml_branch_coverage=1 00:07:46.473 --rc genhtml_function_coverage=1 00:07:46.473 --rc genhtml_legend=1 00:07:46.473 --rc geninfo_all_blocks=1 00:07:46.473 --rc geninfo_unexecuted_blocks=1 00:07:46.473 00:07:46.473 ' 00:07:46.473 07:44:08 env -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:07:46.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.473 --rc genhtml_branch_coverage=1 00:07:46.473 --rc genhtml_function_coverage=1 00:07:46.473 --rc genhtml_legend=1 00:07:46.473 --rc geninfo_all_blocks=1 00:07:46.473 --rc geninfo_unexecuted_blocks=1 00:07:46.473 00:07:46.473 ' 00:07:46.473 07:44:08 env -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:07:46.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.473 --rc genhtml_branch_coverage=1 00:07:46.473 --rc genhtml_function_coverage=1 00:07:46.473 --rc genhtml_legend=1 00:07:46.473 --rc geninfo_all_blocks=1 00:07:46.473 --rc geninfo_unexecuted_blocks=1 00:07:46.473 00:07:46.473 ' 00:07:46.473 07:44:08 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:46.473 07:44:08 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:46.473 07:44:08 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:46.473 07:44:08 env -- common/autotest_common.sh@10 -- # set +x 00:07:46.473 ************************************ 00:07:46.473 START TEST env_memory 00:07:46.473 ************************************ 00:07:46.473 07:44:08 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:46.473 00:07:46.473 00:07:46.473 CUnit - A unit testing framework for C - Version 2.1-3 00:07:46.473 http://cunit.sourceforge.net/ 00:07:46.473 00:07:46.473 00:07:46.473 Suite: memory 00:07:46.473 Test: alloc and free memory map ...[2024-11-06 07:44:09.030543] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:07:46.473 passed 00:07:46.473 Test: mem map translation ...[2024-11-06 07:44:09.092932] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:07:46.473 [2024-11-06 07:44:09.093186] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:07:46.473 [2024-11-06 07:44:09.093512] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:07:46.473 [2024-11-06 07:44:09.093691] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:07:46.732 passed 00:07:46.732 Test: mem map registration ...[2024-11-06 07:44:09.193763] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:07:46.732 [2024-11-06 07:44:09.194050] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:07:46.732 passed 00:07:46.732 Test: mem map adjacent registrations ...passed 00:07:46.732 00:07:46.732 Run Summary: Type Total Ran Passed Failed Inactive 00:07:46.732 suites 1 1 n/a 0 0 00:07:46.732 tests 4 4 4 0 0 00:07:46.732 asserts 152 152 152 0 n/a 00:07:46.732 00:07:46.732 Elapsed time = 0.347 seconds 00:07:46.732 ************************************ 00:07:46.732 END TEST env_memory 00:07:46.732 ************************************ 00:07:46.732 00:07:46.732 real 0m0.391s 00:07:46.732 user 0m0.351s 00:07:46.732 sys 0m0.029s 00:07:46.732 07:44:09 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:46.732 07:44:09 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:07:46.991 07:44:09 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:46.991 07:44:09 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:46.991 07:44:09 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:46.991 07:44:09 env -- common/autotest_common.sh@10 -- # set +x 00:07:46.991 ************************************ 00:07:46.991 START TEST env_vtophys 00:07:46.991 ************************************ 00:07:46.991 07:44:09 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:46.991 EAL: lib.eal log level changed from notice to debug 00:07:46.991 EAL: Detected lcore 0 as core 0 on socket 0 00:07:46.991 EAL: Detected lcore 1 as core 0 on socket 0 00:07:46.991 EAL: Detected lcore 2 as core 0 on socket 0 00:07:46.991 EAL: Detected lcore 3 as core 0 on socket 0 00:07:46.991 EAL: Detected lcore 4 as core 0 on socket 0 00:07:46.991 EAL: Detected lcore 5 as core 0 on socket 0 00:07:46.991 EAL: Detected lcore 6 as core 0 on socket 0 00:07:46.991 EAL: Detected lcore 7 as core 0 on socket 0 00:07:46.991 EAL: Detected lcore 8 as core 0 on socket 0 00:07:46.991 EAL: Detected lcore 9 as core 0 on socket 0 00:07:46.991 EAL: Maximum logical cores by configuration: 128 00:07:46.991 EAL: Detected CPU lcores: 10 00:07:46.991 EAL: Detected NUMA nodes: 1 00:07:46.991 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:07:46.991 EAL: Detected shared linkage of DPDK 00:07:46.991 EAL: No 
shared files mode enabled, IPC will be disabled 00:07:46.991 EAL: Selected IOVA mode 'PA' 00:07:46.991 EAL: Probing VFIO support... 00:07:46.991 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:46.991 EAL: VFIO modules not loaded, skipping VFIO support... 00:07:46.992 EAL: Ask a virtual area of 0x2e000 bytes 00:07:46.992 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:07:46.992 EAL: Setting up physically contiguous memory... 00:07:46.992 EAL: Setting maximum number of open files to 524288 00:07:46.992 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:07:46.992 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:07:46.992 EAL: Ask a virtual area of 0x61000 bytes 00:07:46.992 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:07:46.992 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:46.992 EAL: Ask a virtual area of 0x400000000 bytes 00:07:46.992 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:07:46.992 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:07:46.992 EAL: Ask a virtual area of 0x61000 bytes 00:07:46.992 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:07:46.992 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:46.992 EAL: Ask a virtual area of 0x400000000 bytes 00:07:46.992 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:07:46.992 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:07:46.992 EAL: Ask a virtual area of 0x61000 bytes 00:07:46.992 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:07:46.992 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:46.992 EAL: Ask a virtual area of 0x400000000 bytes 00:07:46.992 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:07:46.992 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:07:46.992 EAL: Ask a virtual area of 0x61000 bytes 00:07:46.992 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:07:46.992 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:46.992 EAL: Ask a virtual area of 0x400000000 bytes 00:07:46.992 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:07:46.992 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:07:46.992 EAL: Hugepages will be freed exactly as allocated. 00:07:46.992 EAL: No shared files mode enabled, IPC is disabled 00:07:46.992 EAL: No shared files mode enabled, IPC is disabled 00:07:46.992 EAL: TSC frequency is ~2200000 KHz 00:07:46.992 EAL: Main lcore 0 is ready (tid=7f1e6b40da40;cpuset=[0]) 00:07:46.992 EAL: Trying to obtain current memory policy. 00:07:46.992 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:46.992 EAL: Restoring previous memory policy: 0 00:07:46.992 EAL: request: mp_malloc_sync 00:07:46.992 EAL: No shared files mode enabled, IPC is disabled 00:07:46.992 EAL: Heap on socket 0 was expanded by 2MB 00:07:46.992 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:46.992 EAL: No PCI address specified using 'addr=' in: bus=pci 00:07:46.992 EAL: Mem event callback 'spdk:(nil)' registered 00:07:46.992 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:07:47.249 00:07:47.249 00:07:47.249 CUnit - A unit testing framework for C - Version 2.1-3 00:07:47.249 http://cunit.sourceforge.net/ 00:07:47.249 00:07:47.249 00:07:47.249 Suite: components_suite 00:07:47.508 Test: vtophys_malloc_test ...passed 00:07:47.508 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:07:47.508 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:47.508 EAL: Restoring previous memory policy: 4 00:07:47.508 EAL: Calling mem event callback 'spdk:(nil)' 00:07:47.508 EAL: request: mp_malloc_sync 00:07:47.508 EAL: No shared files mode enabled, IPC is disabled 00:07:47.508 EAL: Heap on socket 0 was expanded by 4MB 00:07:47.508 EAL: Calling mem event callback 'spdk:(nil)' 00:07:47.508 EAL: request: mp_malloc_sync 00:07:47.508 EAL: No shared files mode enabled, IPC is disabled 00:07:47.508 EAL: Heap on socket 0 was shrunk by 4MB 00:07:47.508 EAL: Trying to obtain current memory policy. 00:07:47.508 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:47.508 EAL: Restoring previous memory policy: 4 00:07:47.508 EAL: Calling mem event callback 'spdk:(nil)' 00:07:47.508 EAL: request: mp_malloc_sync 00:07:47.508 EAL: No shared files mode enabled, IPC is disabled 00:07:47.508 EAL: Heap on socket 0 was expanded by 6MB 00:07:47.508 EAL: Calling mem event callback 'spdk:(nil)' 00:07:47.508 EAL: request: mp_malloc_sync 00:07:47.508 EAL: No shared files mode enabled, IPC is disabled 00:07:47.508 EAL: Heap on socket 0 was shrunk by 6MB 00:07:47.508 EAL: Trying to obtain current memory policy. 00:07:47.508 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:47.508 EAL: Restoring previous memory policy: 4 00:07:47.508 EAL: Calling mem event callback 'spdk:(nil)' 00:07:47.508 EAL: request: mp_malloc_sync 00:07:47.508 EAL: No shared files mode enabled, IPC is disabled 00:07:47.508 EAL: Heap on socket 0 was expanded by 10MB 00:07:47.770 EAL: Calling mem event callback 'spdk:(nil)' 00:07:47.770 EAL: request: mp_malloc_sync 00:07:47.770 EAL: No shared files mode enabled, IPC is disabled 00:07:47.770 EAL: Heap on socket 0 was shrunk by 10MB 00:07:47.770 EAL: Trying to obtain current memory policy. 00:07:47.770 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:47.770 EAL: Restoring previous memory policy: 4 00:07:47.770 EAL: Calling mem event callback 'spdk:(nil)' 00:07:47.770 EAL: request: mp_malloc_sync 00:07:47.770 EAL: No shared files mode enabled, IPC is disabled 00:07:47.770 EAL: Heap on socket 0 was expanded by 18MB 00:07:47.770 EAL: Calling mem event callback 'spdk:(nil)' 00:07:47.770 EAL: request: mp_malloc_sync 00:07:47.770 EAL: No shared files mode enabled, IPC is disabled 00:07:47.770 EAL: Heap on socket 0 was shrunk by 18MB 00:07:47.770 EAL: Trying to obtain current memory policy. 00:07:47.770 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:47.770 EAL: Restoring previous memory policy: 4 00:07:47.770 EAL: Calling mem event callback 'spdk:(nil)' 00:07:47.770 EAL: request: mp_malloc_sync 00:07:47.770 EAL: No shared files mode enabled, IPC is disabled 00:07:47.770 EAL: Heap on socket 0 was expanded by 34MB 00:07:47.770 EAL: Calling mem event callback 'spdk:(nil)' 00:07:47.770 EAL: request: mp_malloc_sync 00:07:47.770 EAL: No shared files mode enabled, IPC is disabled 00:07:47.770 EAL: Heap on socket 0 was shrunk by 34MB 00:07:47.770 EAL: Trying to obtain current memory policy. 
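(For context on the spdk_mem_map errors env_memory logged above: these maps work at 2 MB granularity, which is why vaddr=2097152 len=2097152 is accepted while len=1234 or vaddr=1234 is rejected as invalid. Below is a minimal sketch of that API, assuming the SPDK env headers are available; the app name, ops struct, and translation value are made up for illustration and this is not the test's actual source.)

    #include "spdk/env.h"
    #include <inttypes.h>
    #include <stdio.h>

    #define CHUNK_2MB (2ULL * 1024 * 1024)

    /* No-op notify callback; a real consumer would mirror
     * register/unregister events into its own translations here. */
    static int
    sketch_notify(void *cb_ctx, struct spdk_mem_map *map,
                  enum spdk_mem_map_notify_action action,
                  void *vaddr, size_t size)
    {
        return 0;
    }

    static const struct spdk_mem_map_ops sketch_ops = {
        .notify_cb = sketch_notify,
        .are_contiguous = NULL,
    };

    int
    main(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "mem_map_sketch";            /* hypothetical app name */
        if (spdk_env_init(&opts) < 0) {
            return 1;
        }

        struct spdk_mem_map *map = spdk_mem_map_alloc(0, &sketch_ops, NULL);
        if (map == NULL) {
            return 1;
        }

        /* Accepted: vaddr and len are both 2 MB-aligned. */
        spdk_mem_map_set_translation(map, CHUNK_2MB, CHUNK_2MB, 0x1000);

        uint64_t size = CHUNK_2MB;
        printf("translation=0x%" PRIx64 "\n",
               spdk_mem_map_translate(map, CHUNK_2MB, &size));

        /* Rejected with the same error env_memory provokes: len=1234. */
        spdk_mem_map_set_translation(map, CHUNK_2MB, 1234, 0x1000);

        spdk_mem_map_free(&map);
        return 0;
    }
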
00:07:47.770 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:47.770 EAL: Restoring previous memory policy: 4 00:07:47.770 EAL: Calling mem event callback 'spdk:(nil)' 00:07:47.770 EAL: request: mp_malloc_sync 00:07:47.770 EAL: No shared files mode enabled, IPC is disabled 00:07:47.770 EAL: Heap on socket 0 was expanded by 66MB 00:07:48.055 EAL: Calling mem event callback 'spdk:(nil)' 00:07:48.055 EAL: request: mp_malloc_sync 00:07:48.055 EAL: No shared files mode enabled, IPC is disabled 00:07:48.055 EAL: Heap on socket 0 was shrunk by 66MB 00:07:48.055 EAL: Trying to obtain current memory policy. 00:07:48.055 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:48.055 EAL: Restoring previous memory policy: 4 00:07:48.055 EAL: Calling mem event callback 'spdk:(nil)' 00:07:48.055 EAL: request: mp_malloc_sync 00:07:48.055 EAL: No shared files mode enabled, IPC is disabled 00:07:48.055 EAL: Heap on socket 0 was expanded by 130MB 00:07:48.314 EAL: Calling mem event callback 'spdk:(nil)' 00:07:48.314 EAL: request: mp_malloc_sync 00:07:48.314 EAL: No shared files mode enabled, IPC is disabled 00:07:48.314 EAL: Heap on socket 0 was shrunk by 130MB 00:07:48.572 EAL: Trying to obtain current memory policy. 00:07:48.572 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:48.572 EAL: Restoring previous memory policy: 4 00:07:48.572 EAL: Calling mem event callback 'spdk:(nil)' 00:07:48.572 EAL: request: mp_malloc_sync 00:07:48.572 EAL: No shared files mode enabled, IPC is disabled 00:07:48.572 EAL: Heap on socket 0 was expanded by 258MB 00:07:49.139 EAL: Calling mem event callback 'spdk:(nil)' 00:07:49.139 EAL: request: mp_malloc_sync 00:07:49.139 EAL: No shared files mode enabled, IPC is disabled 00:07:49.139 EAL: Heap on socket 0 was shrunk by 258MB 00:07:49.397 EAL: Trying to obtain current memory policy. 00:07:49.397 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:49.397 EAL: Restoring previous memory policy: 4 00:07:49.397 EAL: Calling mem event callback 'spdk:(nil)' 00:07:49.397 EAL: request: mp_malloc_sync 00:07:49.397 EAL: No shared files mode enabled, IPC is disabled 00:07:49.397 EAL: Heap on socket 0 was expanded by 514MB 00:07:50.333 EAL: Calling mem event callback 'spdk:(nil)' 00:07:50.591 EAL: request: mp_malloc_sync 00:07:50.591 EAL: No shared files mode enabled, IPC is disabled 00:07:50.591 EAL: Heap on socket 0 was shrunk by 514MB 00:07:51.200 EAL: Trying to obtain current memory policy. 
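(The paired "Heap on socket 0 was expanded by"/"shrunk by" lines above trace the EAL heap growing and shrinking as vtophys_malloc_test allocates and frees progressively larger DMA buffers, with the registered 'spdk:(nil)' mem event callback fired on each change. A rough sketch of that allocate-translate-free loop, assuming the SPDK env headers; the app name and size range are illustrative, not the test's source.)

    #include "spdk/env.h"
    #include <inttypes.h>
    #include <stdio.h>

    int
    main(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "vtophys_sketch";            /* hypothetical app name */
        if (spdk_env_init(&opts) < 0) {
            return 1;
        }

        /* Each allocation may expand the EAL heap; each free may shrink
         * it, matching the expanded-by/shrunk-by pairs in the log. */
        for (uint64_t sz = 4ULL << 20; sz <= 64ULL << 20; sz *= 2) {
            void *buf = spdk_dma_malloc(sz, 1ULL << 21, NULL);
            if (buf == NULL) {
                break;
            }
            uint64_t paddr = spdk_vtophys(buf, NULL);
            if (paddr == SPDK_VTOPHYS_ERROR) {
                fprintf(stderr, "no translation for %p\n", buf);
            } else {
                printf("len=%" PRIu64 " vaddr=%p paddr=0x%" PRIx64 "\n",
                       sz, buf, paddr);
            }
            spdk_dma_free(buf);
        }
        return 0;
    }
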
00:07:51.200 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:51.459 EAL: Restoring previous memory policy: 4 00:07:51.459 EAL: Calling mem event callback 'spdk:(nil)' 00:07:51.459 EAL: request: mp_malloc_sync 00:07:51.459 EAL: No shared files mode enabled, IPC is disabled 00:07:51.459 EAL: Heap on socket 0 was expanded by 1026MB 00:07:53.361 EAL: Calling mem event callback 'spdk:(nil)' 00:07:53.361 EAL: request: mp_malloc_sync 00:07:53.361 EAL: No shared files mode enabled, IPC is disabled 00:07:53.361 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:55.263 passed 00:07:55.263 00:07:55.263 Run Summary: Type Total Ran Passed Failed Inactive 00:07:55.263 suites 1 1 n/a 0 0 00:07:55.263 tests 2 2 2 0 0 00:07:55.263 asserts 5628 5628 5628 0 n/a 00:07:55.263 00:07:55.263 Elapsed time = 7.700 seconds 00:07:55.263 EAL: Calling mem event callback 'spdk:(nil)' 00:07:55.263 EAL: request: mp_malloc_sync 00:07:55.263 EAL: No shared files mode enabled, IPC is disabled 00:07:55.263 EAL: Heap on socket 0 was shrunk by 2MB 00:07:55.263 EAL: No shared files mode enabled, IPC is disabled 00:07:55.263 EAL: No shared files mode enabled, IPC is disabled 00:07:55.263 EAL: No shared files mode enabled, IPC is disabled 00:07:55.263 ************************************ 00:07:55.263 END TEST env_vtophys 00:07:55.263 ************************************ 00:07:55.263 00:07:55.263 real 0m8.047s 00:07:55.263 user 0m6.847s 00:07:55.263 sys 0m1.027s 00:07:55.263 07:44:17 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:55.263 07:44:17 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:07:55.263 07:44:17 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:55.263 07:44:17 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:55.263 07:44:17 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:55.263 07:44:17 env -- common/autotest_common.sh@10 -- # set +x 00:07:55.263 ************************************ 00:07:55.263 START TEST env_pci 00:07:55.263 ************************************ 00:07:55.263 07:44:17 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:55.263 00:07:55.263 00:07:55.263 CUnit - A unit testing framework for C - Version 2.1-3 00:07:55.263 http://cunit.sourceforge.net/ 00:07:55.263 00:07:55.263 00:07:55.263 Suite: pci 00:07:55.263 Test: pci_hook ...[2024-11-06 07:44:17.535120] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57913 has claimed it 00:07:55.263 passed 00:07:55.263 00:07:55.263 Run Summary: Type Total Ran Passed Failed Inactive 00:07:55.263 suites 1 1 n/a 0 0 00:07:55.263 tests 1 1 1 0 0 00:07:55.263 asserts 25 25 25 0 n/a 00:07:55.263 00:07:55.263 Elapsed time = 0.009 seconds 00:07:55.263 EAL: Cannot find device (10000:00:01.0) 00:07:55.263 EAL: Failed to attach device on primary process 00:07:55.263 00:07:55.263 real 0m0.085s 00:07:55.263 user 0m0.035s 00:07:55.263 sys 0m0.049s 00:07:55.263 07:44:17 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:55.263 ************************************ 00:07:55.263 END TEST env_pci 00:07:55.263 ************************************ 00:07:55.263 07:44:17 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:07:55.263 07:44:17 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:55.263 07:44:17 env -- env/env.sh@15 -- # uname 00:07:55.263 07:44:17 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:55.263 07:44:17 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:07:55.263 07:44:17 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:55.263 07:44:17 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:55.263 07:44:17 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:55.263 07:44:17 env -- common/autotest_common.sh@10 -- # set +x 00:07:55.263 ************************************ 00:07:55.263 START TEST env_dpdk_post_init 00:07:55.263 ************************************ 00:07:55.263 07:44:17 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:55.263 EAL: Detected CPU lcores: 10 00:07:55.263 EAL: Detected NUMA nodes: 1 00:07:55.263 EAL: Detected shared linkage of DPDK 00:07:55.263 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:55.263 EAL: Selected IOVA mode 'PA' 00:07:55.263 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:55.522 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:07:55.522 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:07:55.522 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:07:55.522 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:07:55.522 Starting DPDK initialization... 00:07:55.522 Starting SPDK post initialization... 00:07:55.522 SPDK NVMe probe 00:07:55.522 Attaching to 0000:00:10.0 00:07:55.522 Attaching to 0000:00:11.0 00:07:55.522 Attaching to 0000:00:12.0 00:07:55.522 Attaching to 0000:00:13.0 00:07:55.522 Attached to 0000:00:10.0 00:07:55.522 Attached to 0000:00:11.0 00:07:55.522 Attached to 0000:00:13.0 00:07:55.522 Attached to 0000:00:12.0 00:07:55.522 Cleaning up... 
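(The "Attaching to"/"Attached to" pairs just above come from spdk_nvme_probe() walking the four emulated 1b36:0010 PCIe controllers. A minimal sketch of that probe/attach/detach flow, assuming the SPDK env and NVMe headers; the app name and controller-array size are made up, and the real env_dpdk_post_init test does more than this.)

    #include "spdk/env.h"
    #include "spdk/nvme.h"
    #include <stdbool.h>
    #include <stdio.h>

    static struct spdk_nvme_ctrlr *g_ctrlrs[16];
    static int g_num_ctrlrs;

    static bool
    probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *copts)
    {
        printf("Attaching to %s\n", trid->traddr);
        return true;                     /* claim every controller found */
    }

    static void
    attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr,
              const struct spdk_nvme_ctrlr_opts *copts)
    {
        printf("Attached to %s\n", trid->traddr);
        if (g_num_ctrlrs < 16) {
            g_ctrlrs[g_num_ctrlrs++] = ctrlr;
        }
    }

    int
    main(void)
    {
        struct spdk_env_opts opts;
        struct spdk_nvme_transport_id trid = {0};
        int i;

        spdk_env_opts_init(&opts);
        opts.name = "probe_sketch";      /* hypothetical app name */
        if (spdk_env_init(&opts) < 0) {
            return 1;
        }

        spdk_nvme_trid_populate_transport(&trid, SPDK_NVME_TRANSPORT_PCIE);
        if (spdk_nvme_probe(&trid, NULL, probe_cb, attach_cb, NULL) != 0) {
            return 1;
        }

        /* "Cleaning up...": detach everything we attached. */
        for (i = 0; i < g_num_ctrlrs; i++) {
            spdk_nvme_detach(g_ctrlrs[i]);
        }
        return 0;
    }
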
00:07:55.522 00:07:55.522 real 0m0.336s 00:07:55.522 user 0m0.118s 00:07:55.522 sys 0m0.120s 00:07:55.522 07:44:17 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:55.522 ************************************ 00:07:55.522 END TEST env_dpdk_post_init 00:07:55.522 ************************************ 00:07:55.522 07:44:17 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:07:55.522 07:44:18 env -- env/env.sh@26 -- # uname 00:07:55.522 07:44:18 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:55.522 07:44:18 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:55.522 07:44:18 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:55.522 07:44:18 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:55.522 07:44:18 env -- common/autotest_common.sh@10 -- # set +x 00:07:55.522 ************************************ 00:07:55.522 START TEST env_mem_callbacks 00:07:55.522 ************************************ 00:07:55.522 07:44:18 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:55.522 EAL: Detected CPU lcores: 10 00:07:55.522 EAL: Detected NUMA nodes: 1 00:07:55.522 EAL: Detected shared linkage of DPDK 00:07:55.522 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:55.522 EAL: Selected IOVA mode 'PA' 00:07:55.781 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:55.781 00:07:55.782 00:07:55.782 CUnit - A unit testing framework for C - Version 2.1-3 00:07:55.782 http://cunit.sourceforge.net/ 00:07:55.782 00:07:55.782 00:07:55.782 Suite: memory 00:07:55.782 Test: test ... 00:07:55.782 register 0x200000200000 2097152 00:07:55.782 malloc 3145728 00:07:55.782 register 0x200000400000 4194304 00:07:55.782 buf 0x2000004fffc0 len 3145728 PASSED 00:07:55.782 malloc 64 00:07:55.782 buf 0x2000004ffec0 len 64 PASSED 00:07:55.782 malloc 4194304 00:07:55.782 register 0x200000800000 6291456 00:07:55.782 buf 0x2000009fffc0 len 4194304 PASSED 00:07:55.782 free 0x2000004fffc0 3145728 00:07:55.782 free 0x2000004ffec0 64 00:07:55.782 unregister 0x200000400000 4194304 PASSED 00:07:55.782 free 0x2000009fffc0 4194304 00:07:55.782 unregister 0x200000800000 6291456 PASSED 00:07:55.782 malloc 8388608 00:07:55.782 register 0x200000400000 10485760 00:07:55.782 buf 0x2000005fffc0 len 8388608 PASSED 00:07:55.782 free 0x2000005fffc0 8388608 00:07:55.782 unregister 0x200000400000 10485760 PASSED 00:07:55.782 passed 00:07:55.782 00:07:55.782 Run Summary: Type Total Ran Passed Failed Inactive 00:07:55.782 suites 1 1 n/a 0 0 00:07:55.782 tests 1 1 1 0 0 00:07:55.782 asserts 15 15 15 0 n/a 00:07:55.782 00:07:55.782 Elapsed time = 0.075 seconds 00:07:55.782 00:07:55.782 real 0m0.273s 00:07:55.782 user 0m0.105s 00:07:55.782 sys 0m0.066s 00:07:55.782 07:44:18 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:55.782 07:44:18 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:07:55.782 ************************************ 00:07:55.782 END TEST env_mem_callbacks 00:07:55.782 ************************************ 00:07:55.782 00:07:55.782 real 0m9.589s 00:07:55.782 user 0m7.651s 00:07:55.782 sys 0m1.535s 00:07:55.782 07:44:18 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:55.782 ************************************ 00:07:55.782 END TEST env 00:07:55.782 ************************************ 00:07:55.782 07:44:18 env -- 
common/autotest_common.sh@10 -- # set +x 00:07:55.782 07:44:18 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:55.782 07:44:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:55.782 07:44:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:55.782 07:44:18 -- common/autotest_common.sh@10 -- # set +x 00:07:55.782 ************************************ 00:07:55.782 START TEST rpc 00:07:55.782 ************************************ 00:07:55.782 07:44:18 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:56.041 * Looking for test storage... 00:07:56.041 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:56.041 07:44:18 rpc -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:07:56.041 07:44:18 rpc -- common/autotest_common.sh@1689 -- # lcov --version 00:07:56.041 07:44:18 rpc -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:07:56.041 07:44:18 rpc -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:07:56.041 07:44:18 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:56.041 07:44:18 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:56.041 07:44:18 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:56.041 07:44:18 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:56.041 07:44:18 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:56.041 07:44:18 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:56.041 07:44:18 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:56.041 07:44:18 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:56.041 07:44:18 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:56.041 07:44:18 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:56.041 07:44:18 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:56.041 07:44:18 rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:56.041 07:44:18 rpc -- scripts/common.sh@345 -- # : 1 00:07:56.041 07:44:18 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:56.041 07:44:18 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:56.041 07:44:18 rpc -- scripts/common.sh@365 -- # decimal 1 00:07:56.041 07:44:18 rpc -- scripts/common.sh@353 -- # local d=1 00:07:56.041 07:44:18 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:56.041 07:44:18 rpc -- scripts/common.sh@355 -- # echo 1 00:07:56.041 07:44:18 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:56.041 07:44:18 rpc -- scripts/common.sh@366 -- # decimal 2 00:07:56.041 07:44:18 rpc -- scripts/common.sh@353 -- # local d=2 00:07:56.041 07:44:18 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:56.041 07:44:18 rpc -- scripts/common.sh@355 -- # echo 2 00:07:56.041 07:44:18 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:56.041 07:44:18 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:56.041 07:44:18 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:56.041 07:44:18 rpc -- scripts/common.sh@368 -- # return 0 00:07:56.041 07:44:18 rpc -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:56.041 07:44:18 rpc -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:07:56.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.041 --rc genhtml_branch_coverage=1 00:07:56.041 --rc genhtml_function_coverage=1 00:07:56.041 --rc genhtml_legend=1 00:07:56.041 --rc geninfo_all_blocks=1 00:07:56.041 --rc geninfo_unexecuted_blocks=1 00:07:56.041 00:07:56.041 ' 00:07:56.041 07:44:18 rpc -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:07:56.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.041 --rc genhtml_branch_coverage=1 00:07:56.041 --rc genhtml_function_coverage=1 00:07:56.041 --rc genhtml_legend=1 00:07:56.041 --rc geninfo_all_blocks=1 00:07:56.041 --rc geninfo_unexecuted_blocks=1 00:07:56.041 00:07:56.041 ' 00:07:56.041 07:44:18 rpc -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:07:56.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.041 --rc genhtml_branch_coverage=1 00:07:56.041 --rc genhtml_function_coverage=1 00:07:56.041 --rc genhtml_legend=1 00:07:56.041 --rc geninfo_all_blocks=1 00:07:56.041 --rc geninfo_unexecuted_blocks=1 00:07:56.041 00:07:56.041 ' 00:07:56.041 07:44:18 rpc -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:07:56.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.041 --rc genhtml_branch_coverage=1 00:07:56.041 --rc genhtml_function_coverage=1 00:07:56.041 --rc genhtml_legend=1 00:07:56.041 --rc geninfo_all_blocks=1 00:07:56.041 --rc geninfo_unexecuted_blocks=1 00:07:56.041 00:07:56.041 ' 00:07:56.041 07:44:18 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58040 00:07:56.041 07:44:18 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:07:56.041 07:44:18 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:56.041 07:44:18 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58040 00:07:56.041 07:44:18 rpc -- common/autotest_common.sh@831 -- # '[' -z 58040 ']' 00:07:56.041 07:44:18 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.041 07:44:18 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:56.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.041 07:44:18 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:56.041 07:44:18 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:56.041 07:44:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:56.300 [2024-11-06 07:44:18.717580] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:07:56.300 [2024-11-06 07:44:18.717783] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58040 ] 00:07:56.300 [2024-11-06 07:44:18.904952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.559 [2024-11-06 07:44:19.039591] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:56.559 [2024-11-06 07:44:19.039669] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58040' to capture a snapshot of events at runtime. 00:07:56.559 [2024-11-06 07:44:19.039687] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:56.559 [2024-11-06 07:44:19.039703] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:56.559 [2024-11-06 07:44:19.039715] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58040 for offline analysis/debug. 00:07:56.559 [2024-11-06 07:44:19.041016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.568 07:44:19 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:57.568 07:44:19 rpc -- common/autotest_common.sh@864 -- # return 0 00:07:57.568 07:44:19 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:57.568 07:44:19 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:57.568 07:44:19 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:57.568 07:44:19 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:57.568 07:44:19 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:57.568 07:44:19 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:57.568 07:44:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:57.568 ************************************ 00:07:57.568 START TEST rpc_integrity 00:07:57.568 ************************************ 00:07:57.568 07:44:19 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:07:57.568 07:44:19 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:57.568 07:44:19 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.568 07:44:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:57.568 07:44:19 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.568 07:44:19 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:57.568 07:44:19 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:57.568 07:44:20 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:57.568 07:44:20 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:57.568 07:44:20 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.568 07:44:20 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:57.568 07:44:20 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.568 07:44:20 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:57.568 07:44:20 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:57.568 07:44:20 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.568 07:44:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:57.568 07:44:20 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.568 07:44:20 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:57.568 { 00:07:57.568 "name": "Malloc0", 00:07:57.568 "aliases": [ 00:07:57.568 "41a91e1f-4f01-4862-901b-793355d2c93a" 00:07:57.568 ], 00:07:57.568 "product_name": "Malloc disk", 00:07:57.568 "block_size": 512, 00:07:57.568 "num_blocks": 16384, 00:07:57.568 "uuid": "41a91e1f-4f01-4862-901b-793355d2c93a", 00:07:57.568 "assigned_rate_limits": { 00:07:57.568 "rw_ios_per_sec": 0, 00:07:57.568 "rw_mbytes_per_sec": 0, 00:07:57.568 "r_mbytes_per_sec": 0, 00:07:57.568 "w_mbytes_per_sec": 0 00:07:57.568 }, 00:07:57.568 "claimed": false, 00:07:57.568 "zoned": false, 00:07:57.568 "supported_io_types": { 00:07:57.568 "read": true, 00:07:57.568 "write": true, 00:07:57.568 "unmap": true, 00:07:57.568 "flush": true, 00:07:57.568 "reset": true, 00:07:57.568 "nvme_admin": false, 00:07:57.568 "nvme_io": false, 00:07:57.568 "nvme_io_md": false, 00:07:57.568 "write_zeroes": true, 00:07:57.568 "zcopy": true, 00:07:57.568 "get_zone_info": false, 00:07:57.568 "zone_management": false, 00:07:57.568 "zone_append": false, 00:07:57.568 "compare": false, 00:07:57.568 "compare_and_write": false, 00:07:57.568 "abort": true, 00:07:57.568 "seek_hole": false, 00:07:57.568 "seek_data": false, 00:07:57.568 "copy": true, 00:07:57.568 "nvme_iov_md": false 00:07:57.568 }, 00:07:57.568 "memory_domains": [ 00:07:57.568 { 00:07:57.568 "dma_device_id": "system", 00:07:57.568 "dma_device_type": 1 00:07:57.568 }, 00:07:57.568 { 00:07:57.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.568 "dma_device_type": 2 00:07:57.568 } 00:07:57.568 ], 00:07:57.568 "driver_specific": {} 00:07:57.568 } 00:07:57.568 ]' 00:07:57.568 07:44:20 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:57.568 07:44:20 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:57.569 07:44:20 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:57.569 07:44:20 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.569 07:44:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:57.569 [2024-11-06 07:44:20.139890] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:57.569 [2024-11-06 07:44:20.139979] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:57.569 [2024-11-06 07:44:20.140018] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:57.569 [2024-11-06 07:44:20.140038] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:57.569 [2024-11-06 07:44:20.143140] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:57.569 [2024-11-06 07:44:20.143199] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:57.569 Passthru0 00:07:57.569 07:44:20 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.569 
07:44:20 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:57.569 07:44:20 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.569 07:44:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:57.569 07:44:20 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.569 07:44:20 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:57.569 { 00:07:57.569 "name": "Malloc0", 00:07:57.569 "aliases": [ 00:07:57.569 "41a91e1f-4f01-4862-901b-793355d2c93a" 00:07:57.569 ], 00:07:57.569 "product_name": "Malloc disk", 00:07:57.569 "block_size": 512, 00:07:57.569 "num_blocks": 16384, 00:07:57.569 "uuid": "41a91e1f-4f01-4862-901b-793355d2c93a", 00:07:57.569 "assigned_rate_limits": { 00:07:57.569 "rw_ios_per_sec": 0, 00:07:57.569 "rw_mbytes_per_sec": 0, 00:07:57.569 "r_mbytes_per_sec": 0, 00:07:57.569 "w_mbytes_per_sec": 0 00:07:57.569 }, 00:07:57.569 "claimed": true, 00:07:57.569 "claim_type": "exclusive_write", 00:07:57.569 "zoned": false, 00:07:57.569 "supported_io_types": { 00:07:57.569 "read": true, 00:07:57.569 "write": true, 00:07:57.569 "unmap": true, 00:07:57.569 "flush": true, 00:07:57.569 "reset": true, 00:07:57.569 "nvme_admin": false, 00:07:57.569 "nvme_io": false, 00:07:57.569 "nvme_io_md": false, 00:07:57.569 "write_zeroes": true, 00:07:57.569 "zcopy": true, 00:07:57.569 "get_zone_info": false, 00:07:57.569 "zone_management": false, 00:07:57.569 "zone_append": false, 00:07:57.569 "compare": false, 00:07:57.569 "compare_and_write": false, 00:07:57.569 "abort": true, 00:07:57.569 "seek_hole": false, 00:07:57.569 "seek_data": false, 00:07:57.569 "copy": true, 00:07:57.569 "nvme_iov_md": false 00:07:57.569 }, 00:07:57.569 "memory_domains": [ 00:07:57.569 { 00:07:57.569 "dma_device_id": "system", 00:07:57.569 "dma_device_type": 1 00:07:57.569 }, 00:07:57.569 { 00:07:57.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.569 "dma_device_type": 2 00:07:57.569 } 00:07:57.569 ], 00:07:57.569 "driver_specific": {} 00:07:57.569 }, 00:07:57.569 { 00:07:57.569 "name": "Passthru0", 00:07:57.569 "aliases": [ 00:07:57.569 "0a6f1238-e7d1-5ae0-9181-f5dc11a6caa3" 00:07:57.569 ], 00:07:57.569 "product_name": "passthru", 00:07:57.569 "block_size": 512, 00:07:57.569 "num_blocks": 16384, 00:07:57.569 "uuid": "0a6f1238-e7d1-5ae0-9181-f5dc11a6caa3", 00:07:57.569 "assigned_rate_limits": { 00:07:57.569 "rw_ios_per_sec": 0, 00:07:57.569 "rw_mbytes_per_sec": 0, 00:07:57.569 "r_mbytes_per_sec": 0, 00:07:57.569 "w_mbytes_per_sec": 0 00:07:57.569 }, 00:07:57.569 "claimed": false, 00:07:57.569 "zoned": false, 00:07:57.569 "supported_io_types": { 00:07:57.569 "read": true, 00:07:57.569 "write": true, 00:07:57.569 "unmap": true, 00:07:57.569 "flush": true, 00:07:57.569 "reset": true, 00:07:57.569 "nvme_admin": false, 00:07:57.569 "nvme_io": false, 00:07:57.569 "nvme_io_md": false, 00:07:57.569 "write_zeroes": true, 00:07:57.569 "zcopy": true, 00:07:57.569 "get_zone_info": false, 00:07:57.569 "zone_management": false, 00:07:57.569 "zone_append": false, 00:07:57.569 "compare": false, 00:07:57.569 "compare_and_write": false, 00:07:57.569 "abort": true, 00:07:57.569 "seek_hole": false, 00:07:57.569 "seek_data": false, 00:07:57.569 "copy": true, 00:07:57.569 "nvme_iov_md": false 00:07:57.569 }, 00:07:57.569 "memory_domains": [ 00:07:57.569 { 00:07:57.569 "dma_device_id": "system", 00:07:57.569 "dma_device_type": 1 00:07:57.569 }, 00:07:57.569 { 00:07:57.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.569 "dma_device_type": 2 
00:07:57.569 } 00:07:57.569 ], 00:07:57.569 "driver_specific": { 00:07:57.569 "passthru": { 00:07:57.569 "name": "Passthru0", 00:07:57.569 "base_bdev_name": "Malloc0" 00:07:57.569 } 00:07:57.569 } 00:07:57.569 } 00:07:57.569 ]' 00:07:57.569 07:44:20 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:57.828 07:44:20 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:57.828 07:44:20 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:57.828 07:44:20 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.828 07:44:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:57.828 07:44:20 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.828 07:44:20 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:57.828 07:44:20 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.828 07:44:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:57.828 07:44:20 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.828 07:44:20 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:57.828 07:44:20 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.828 07:44:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:57.828 07:44:20 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.828 07:44:20 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:57.828 07:44:20 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:57.828 07:44:20 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:57.828 00:07:57.828 real 0m0.359s 00:07:57.828 user 0m0.220s 00:07:57.828 sys 0m0.041s 00:07:57.828 07:44:20 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:57.828 ************************************ 00:07:57.828 END TEST rpc_integrity 00:07:57.828 07:44:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:57.828 ************************************ 00:07:57.828 07:44:20 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:57.828 07:44:20 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:57.828 07:44:20 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:57.828 07:44:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:57.828 ************************************ 00:07:57.828 START TEST rpc_plugins 00:07:57.828 ************************************ 00:07:57.828 07:44:20 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:07:57.828 07:44:20 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:07:57.828 07:44:20 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.828 07:44:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:57.828 07:44:20 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.828 07:44:20 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:57.828 07:44:20 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:57.828 07:44:20 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.828 07:44:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:57.828 07:44:20 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.828 07:44:20 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:57.828 { 00:07:57.828 "name": "Malloc1", 00:07:57.828 "aliases": 
[ 00:07:57.828 "9f7469f8-7d33-4bf0-8261-25d37fb46879" 00:07:57.828 ], 00:07:57.828 "product_name": "Malloc disk", 00:07:57.828 "block_size": 4096, 00:07:57.828 "num_blocks": 256, 00:07:57.828 "uuid": "9f7469f8-7d33-4bf0-8261-25d37fb46879", 00:07:57.828 "assigned_rate_limits": { 00:07:57.828 "rw_ios_per_sec": 0, 00:07:57.828 "rw_mbytes_per_sec": 0, 00:07:57.828 "r_mbytes_per_sec": 0, 00:07:57.828 "w_mbytes_per_sec": 0 00:07:57.828 }, 00:07:57.828 "claimed": false, 00:07:57.828 "zoned": false, 00:07:57.828 "supported_io_types": { 00:07:57.828 "read": true, 00:07:57.828 "write": true, 00:07:57.828 "unmap": true, 00:07:57.828 "flush": true, 00:07:57.828 "reset": true, 00:07:57.828 "nvme_admin": false, 00:07:57.828 "nvme_io": false, 00:07:57.828 "nvme_io_md": false, 00:07:57.828 "write_zeroes": true, 00:07:57.828 "zcopy": true, 00:07:57.828 "get_zone_info": false, 00:07:57.828 "zone_management": false, 00:07:57.828 "zone_append": false, 00:07:57.828 "compare": false, 00:07:57.828 "compare_and_write": false, 00:07:57.828 "abort": true, 00:07:57.828 "seek_hole": false, 00:07:57.828 "seek_data": false, 00:07:57.828 "copy": true, 00:07:57.828 "nvme_iov_md": false 00:07:57.828 }, 00:07:57.828 "memory_domains": [ 00:07:57.828 { 00:07:57.828 "dma_device_id": "system", 00:07:57.828 "dma_device_type": 1 00:07:57.828 }, 00:07:57.828 { 00:07:57.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.828 "dma_device_type": 2 00:07:57.828 } 00:07:57.828 ], 00:07:57.828 "driver_specific": {} 00:07:57.828 } 00:07:57.828 ]' 00:07:57.828 07:44:20 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:07:58.087 07:44:20 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:58.087 07:44:20 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:58.087 07:44:20 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.087 07:44:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:58.087 07:44:20 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.087 07:44:20 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:58.087 07:44:20 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.087 07:44:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:58.087 07:44:20 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.087 07:44:20 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:58.087 07:44:20 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:07:58.087 07:44:20 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:58.087 00:07:58.087 real 0m0.162s 00:07:58.087 user 0m0.098s 00:07:58.087 sys 0m0.020s 00:07:58.087 07:44:20 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:58.087 07:44:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:58.087 ************************************ 00:07:58.087 END TEST rpc_plugins 00:07:58.087 ************************************ 00:07:58.087 07:44:20 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:58.087 07:44:20 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:58.087 07:44:20 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:58.087 07:44:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.087 ************************************ 00:07:58.087 START TEST rpc_trace_cmd_test 00:07:58.087 ************************************ 00:07:58.087 07:44:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 
-- # rpc_trace_cmd_test 00:07:58.087 07:44:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:07:58.087 07:44:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:58.087 07:44:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.087 07:44:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.087 07:44:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.087 07:44:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:07:58.087 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58040", 00:07:58.087 "tpoint_group_mask": "0x8", 00:07:58.087 "iscsi_conn": { 00:07:58.087 "mask": "0x2", 00:07:58.087 "tpoint_mask": "0x0" 00:07:58.087 }, 00:07:58.087 "scsi": { 00:07:58.087 "mask": "0x4", 00:07:58.087 "tpoint_mask": "0x0" 00:07:58.087 }, 00:07:58.087 "bdev": { 00:07:58.087 "mask": "0x8", 00:07:58.087 "tpoint_mask": "0xffffffffffffffff" 00:07:58.087 }, 00:07:58.087 "nvmf_rdma": { 00:07:58.087 "mask": "0x10", 00:07:58.087 "tpoint_mask": "0x0" 00:07:58.087 }, 00:07:58.087 "nvmf_tcp": { 00:07:58.087 "mask": "0x20", 00:07:58.087 "tpoint_mask": "0x0" 00:07:58.087 }, 00:07:58.088 "ftl": { 00:07:58.088 "mask": "0x40", 00:07:58.088 "tpoint_mask": "0x0" 00:07:58.088 }, 00:07:58.088 "blobfs": { 00:07:58.088 "mask": "0x80", 00:07:58.088 "tpoint_mask": "0x0" 00:07:58.088 }, 00:07:58.088 "dsa": { 00:07:58.088 "mask": "0x200", 00:07:58.088 "tpoint_mask": "0x0" 00:07:58.088 }, 00:07:58.088 "thread": { 00:07:58.088 "mask": "0x400", 00:07:58.088 "tpoint_mask": "0x0" 00:07:58.088 }, 00:07:58.088 "nvme_pcie": { 00:07:58.088 "mask": "0x800", 00:07:58.088 "tpoint_mask": "0x0" 00:07:58.088 }, 00:07:58.088 "iaa": { 00:07:58.088 "mask": "0x1000", 00:07:58.088 "tpoint_mask": "0x0" 00:07:58.088 }, 00:07:58.088 "nvme_tcp": { 00:07:58.088 "mask": "0x2000", 00:07:58.088 "tpoint_mask": "0x0" 00:07:58.088 }, 00:07:58.088 "bdev_nvme": { 00:07:58.088 "mask": "0x4000", 00:07:58.088 "tpoint_mask": "0x0" 00:07:58.088 }, 00:07:58.088 "sock": { 00:07:58.088 "mask": "0x8000", 00:07:58.088 "tpoint_mask": "0x0" 00:07:58.088 }, 00:07:58.088 "blob": { 00:07:58.088 "mask": "0x10000", 00:07:58.088 "tpoint_mask": "0x0" 00:07:58.088 }, 00:07:58.088 "bdev_raid": { 00:07:58.088 "mask": "0x20000", 00:07:58.088 "tpoint_mask": "0x0" 00:07:58.088 }, 00:07:58.088 "scheduler": { 00:07:58.088 "mask": "0x40000", 00:07:58.088 "tpoint_mask": "0x0" 00:07:58.088 } 00:07:58.088 }' 00:07:58.088 07:44:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:07:58.088 07:44:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:07:58.088 07:44:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:58.088 07:44:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:58.088 07:44:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:58.347 07:44:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:58.347 07:44:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:58.347 07:44:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:58.347 07:44:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:58.347 07:44:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:07:58.347 00:07:58.347 real 0m0.259s 00:07:58.347 user 0m0.221s 00:07:58.347 sys 0m0.030s 00:07:58.347 07:44:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:07:58.347 07:44:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.347 ************************************ 00:07:58.347 END TEST rpc_trace_cmd_test 00:07:58.347 ************************************ 00:07:58.347 07:44:20 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:58.347 07:44:20 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:58.347 07:44:20 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:58.347 07:44:20 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:58.347 07:44:20 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:58.347 07:44:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.347 ************************************ 00:07:58.347 START TEST rpc_daemon_integrity 00:07:58.347 ************************************ 00:07:58.347 07:44:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:07:58.347 07:44:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:58.347 07:44:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.347 07:44:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:58.347 07:44:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.347 07:44:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:58.347 07:44:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:58.605 07:44:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:58.605 07:44:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:58.605 07:44:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.605 07:44:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:58.605 07:44:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.605 07:44:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:07:58.605 07:44:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:58.605 07:44:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.605 07:44:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:58.606 07:44:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.606 07:44:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:58.606 { 00:07:58.606 "name": "Malloc2", 00:07:58.606 "aliases": [ 00:07:58.606 "68bf5e66-ac6e-4061-abcd-72ecd26cbb29" 00:07:58.606 ], 00:07:58.606 "product_name": "Malloc disk", 00:07:58.606 "block_size": 512, 00:07:58.606 "num_blocks": 16384, 00:07:58.606 "uuid": "68bf5e66-ac6e-4061-abcd-72ecd26cbb29", 00:07:58.606 "assigned_rate_limits": { 00:07:58.606 "rw_ios_per_sec": 0, 00:07:58.606 "rw_mbytes_per_sec": 0, 00:07:58.606 "r_mbytes_per_sec": 0, 00:07:58.606 "w_mbytes_per_sec": 0 00:07:58.606 }, 00:07:58.606 "claimed": false, 00:07:58.606 "zoned": false, 00:07:58.606 "supported_io_types": { 00:07:58.606 "read": true, 00:07:58.606 "write": true, 00:07:58.606 "unmap": true, 00:07:58.606 "flush": true, 00:07:58.606 "reset": true, 00:07:58.606 "nvme_admin": false, 00:07:58.606 "nvme_io": false, 00:07:58.606 "nvme_io_md": false, 00:07:58.606 "write_zeroes": true, 00:07:58.606 "zcopy": true, 00:07:58.606 "get_zone_info": false, 00:07:58.606 "zone_management": false, 00:07:58.606 "zone_append": false, 00:07:58.606 "compare": false, 00:07:58.606 
"compare_and_write": false, 00:07:58.606 "abort": true, 00:07:58.606 "seek_hole": false, 00:07:58.606 "seek_data": false, 00:07:58.606 "copy": true, 00:07:58.606 "nvme_iov_md": false 00:07:58.606 }, 00:07:58.606 "memory_domains": [ 00:07:58.606 { 00:07:58.606 "dma_device_id": "system", 00:07:58.606 "dma_device_type": 1 00:07:58.606 }, 00:07:58.606 { 00:07:58.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.606 "dma_device_type": 2 00:07:58.606 } 00:07:58.606 ], 00:07:58.606 "driver_specific": {} 00:07:58.606 } 00:07:58.606 ]' 00:07:58.606 07:44:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:58.606 07:44:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:58.606 07:44:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:58.606 07:44:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.606 07:44:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:58.606 [2024-11-06 07:44:21.086801] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:58.606 [2024-11-06 07:44:21.086900] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:58.606 [2024-11-06 07:44:21.086933] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:07:58.606 [2024-11-06 07:44:21.086967] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:58.606 [2024-11-06 07:44:21.090139] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:58.606 [2024-11-06 07:44:21.090191] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:58.606 Passthru0 00:07:58.606 07:44:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.606 07:44:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:58.606 07:44:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.606 07:44:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:58.606 07:44:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.606 07:44:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:58.606 { 00:07:58.606 "name": "Malloc2", 00:07:58.606 "aliases": [ 00:07:58.606 "68bf5e66-ac6e-4061-abcd-72ecd26cbb29" 00:07:58.606 ], 00:07:58.606 "product_name": "Malloc disk", 00:07:58.606 "block_size": 512, 00:07:58.606 "num_blocks": 16384, 00:07:58.606 "uuid": "68bf5e66-ac6e-4061-abcd-72ecd26cbb29", 00:07:58.606 "assigned_rate_limits": { 00:07:58.606 "rw_ios_per_sec": 0, 00:07:58.606 "rw_mbytes_per_sec": 0, 00:07:58.606 "r_mbytes_per_sec": 0, 00:07:58.606 "w_mbytes_per_sec": 0 00:07:58.606 }, 00:07:58.606 "claimed": true, 00:07:58.606 "claim_type": "exclusive_write", 00:07:58.606 "zoned": false, 00:07:58.606 "supported_io_types": { 00:07:58.606 "read": true, 00:07:58.606 "write": true, 00:07:58.606 "unmap": true, 00:07:58.606 "flush": true, 00:07:58.606 "reset": true, 00:07:58.606 "nvme_admin": false, 00:07:58.606 "nvme_io": false, 00:07:58.606 "nvme_io_md": false, 00:07:58.606 "write_zeroes": true, 00:07:58.606 "zcopy": true, 00:07:58.606 "get_zone_info": false, 00:07:58.606 "zone_management": false, 00:07:58.606 "zone_append": false, 00:07:58.606 "compare": false, 00:07:58.606 "compare_and_write": false, 00:07:58.606 "abort": true, 00:07:58.606 "seek_hole": false, 00:07:58.606 "seek_data": false, 
00:07:58.606 "copy": true, 00:07:58.606 "nvme_iov_md": false 00:07:58.606 }, 00:07:58.606 "memory_domains": [ 00:07:58.606 { 00:07:58.606 "dma_device_id": "system", 00:07:58.606 "dma_device_type": 1 00:07:58.606 }, 00:07:58.606 { 00:07:58.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.606 "dma_device_type": 2 00:07:58.606 } 00:07:58.606 ], 00:07:58.606 "driver_specific": {} 00:07:58.606 }, 00:07:58.606 { 00:07:58.606 "name": "Passthru0", 00:07:58.606 "aliases": [ 00:07:58.606 "6de28441-2d45-5995-b94a-b681f13e1a7a" 00:07:58.606 ], 00:07:58.606 "product_name": "passthru", 00:07:58.606 "block_size": 512, 00:07:58.606 "num_blocks": 16384, 00:07:58.606 "uuid": "6de28441-2d45-5995-b94a-b681f13e1a7a", 00:07:58.606 "assigned_rate_limits": { 00:07:58.606 "rw_ios_per_sec": 0, 00:07:58.606 "rw_mbytes_per_sec": 0, 00:07:58.606 "r_mbytes_per_sec": 0, 00:07:58.606 "w_mbytes_per_sec": 0 00:07:58.606 }, 00:07:58.606 "claimed": false, 00:07:58.606 "zoned": false, 00:07:58.606 "supported_io_types": { 00:07:58.606 "read": true, 00:07:58.606 "write": true, 00:07:58.606 "unmap": true, 00:07:58.606 "flush": true, 00:07:58.606 "reset": true, 00:07:58.606 "nvme_admin": false, 00:07:58.606 "nvme_io": false, 00:07:58.606 "nvme_io_md": false, 00:07:58.606 "write_zeroes": true, 00:07:58.606 "zcopy": true, 00:07:58.606 "get_zone_info": false, 00:07:58.606 "zone_management": false, 00:07:58.606 "zone_append": false, 00:07:58.606 "compare": false, 00:07:58.606 "compare_and_write": false, 00:07:58.606 "abort": true, 00:07:58.606 "seek_hole": false, 00:07:58.606 "seek_data": false, 00:07:58.606 "copy": true, 00:07:58.606 "nvme_iov_md": false 00:07:58.606 }, 00:07:58.606 "memory_domains": [ 00:07:58.606 { 00:07:58.606 "dma_device_id": "system", 00:07:58.606 "dma_device_type": 1 00:07:58.606 }, 00:07:58.606 { 00:07:58.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.606 "dma_device_type": 2 00:07:58.606 } 00:07:58.606 ], 00:07:58.606 "driver_specific": { 00:07:58.606 "passthru": { 00:07:58.606 "name": "Passthru0", 00:07:58.606 "base_bdev_name": "Malloc2" 00:07:58.606 } 00:07:58.606 } 00:07:58.606 } 00:07:58.606 ]' 00:07:58.606 07:44:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:58.606 07:44:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:58.606 07:44:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:58.606 07:44:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.606 07:44:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:58.606 07:44:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.606 07:44:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:58.606 07:44:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.606 07:44:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:58.606 07:44:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.606 07:44:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:58.606 07:44:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.606 07:44:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:58.606 07:44:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.865 07:44:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:07:58.865 07:44:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:58.865 07:44:21 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:58.865 00:07:58.865 real 0m0.374s 00:07:58.865 user 0m0.223s 00:07:58.865 sys 0m0.051s 00:07:58.865 07:44:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:58.865 ************************************ 00:07:58.865 END TEST rpc_daemon_integrity 00:07:58.865 07:44:21 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:58.865 ************************************ 00:07:58.865 07:44:21 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:58.865 07:44:21 rpc -- rpc/rpc.sh@84 -- # killprocess 58040 00:07:58.865 07:44:21 rpc -- common/autotest_common.sh@950 -- # '[' -z 58040 ']' 00:07:58.865 07:44:21 rpc -- common/autotest_common.sh@954 -- # kill -0 58040 00:07:58.865 07:44:21 rpc -- common/autotest_common.sh@955 -- # uname 00:07:58.865 07:44:21 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:58.865 07:44:21 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58040 00:07:58.865 07:44:21 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:58.865 07:44:21 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:58.865 killing process with pid 58040 00:07:58.865 07:44:21 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58040' 00:07:58.865 07:44:21 rpc -- common/autotest_common.sh@969 -- # kill 58040 00:07:58.865 07:44:21 rpc -- common/autotest_common.sh@974 -- # wait 58040 00:08:01.397 00:08:01.397 real 0m5.347s 00:08:01.397 user 0m6.084s 00:08:01.397 sys 0m0.958s 00:08:01.397 07:44:23 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:01.397 ************************************ 00:08:01.397 END TEST rpc 00:08:01.397 ************************************ 00:08:01.397 07:44:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:01.397 07:44:23 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:08:01.397 07:44:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:01.397 07:44:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:01.397 07:44:23 -- common/autotest_common.sh@10 -- # set +x 00:08:01.397 ************************************ 00:08:01.397 START TEST skip_rpc 00:08:01.397 ************************************ 00:08:01.397 07:44:23 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:08:01.397 * Looking for test storage... 
00:08:01.397 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:08:01.397 07:44:23 skip_rpc -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:08:01.397 07:44:23 skip_rpc -- common/autotest_common.sh@1689 -- # lcov --version 00:08:01.397 07:44:23 skip_rpc -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:08:01.397 07:44:23 skip_rpc -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:08:01.397 07:44:23 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:01.397 07:44:23 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:01.397 07:44:23 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:01.397 07:44:23 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:01.397 07:44:23 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:01.397 07:44:23 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:01.397 07:44:23 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:01.397 07:44:23 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:01.397 07:44:23 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:01.397 07:44:23 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:01.397 07:44:23 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:01.398 07:44:23 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:01.398 07:44:23 skip_rpc -- scripts/common.sh@345 -- # : 1 00:08:01.398 07:44:23 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:01.398 07:44:23 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:01.398 07:44:23 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:01.398 07:44:23 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:08:01.398 07:44:23 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:01.398 07:44:23 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:08:01.398 07:44:23 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:01.398 07:44:23 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:01.398 07:44:23 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:08:01.398 07:44:23 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:01.398 07:44:23 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:08:01.398 07:44:23 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:01.398 07:44:23 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:01.398 07:44:23 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:01.398 07:44:23 skip_rpc -- scripts/common.sh@368 -- # return 0 00:08:01.398 07:44:23 skip_rpc -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:01.398 07:44:23 skip_rpc -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:08:01.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.398 --rc genhtml_branch_coverage=1 00:08:01.398 --rc genhtml_function_coverage=1 00:08:01.398 --rc genhtml_legend=1 00:08:01.398 --rc geninfo_all_blocks=1 00:08:01.398 --rc geninfo_unexecuted_blocks=1 00:08:01.398 00:08:01.398 ' 00:08:01.398 07:44:23 skip_rpc -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:08:01.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.398 --rc genhtml_branch_coverage=1 00:08:01.398 --rc genhtml_function_coverage=1 00:08:01.398 --rc genhtml_legend=1 00:08:01.398 --rc geninfo_all_blocks=1 00:08:01.398 --rc geninfo_unexecuted_blocks=1 00:08:01.398 00:08:01.398 ' 00:08:01.398 07:44:23 skip_rpc -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 
00:08:01.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.398 --rc genhtml_branch_coverage=1 00:08:01.398 --rc genhtml_function_coverage=1 00:08:01.398 --rc genhtml_legend=1 00:08:01.398 --rc geninfo_all_blocks=1 00:08:01.398 --rc geninfo_unexecuted_blocks=1 00:08:01.398 00:08:01.398 ' 00:08:01.398 07:44:23 skip_rpc -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:08:01.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.398 --rc genhtml_branch_coverage=1 00:08:01.398 --rc genhtml_function_coverage=1 00:08:01.398 --rc genhtml_legend=1 00:08:01.398 --rc geninfo_all_blocks=1 00:08:01.398 --rc geninfo_unexecuted_blocks=1 00:08:01.398 00:08:01.398 ' 00:08:01.398 07:44:23 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:01.398 07:44:23 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:01.398 07:44:23 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:08:01.398 07:44:23 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:01.398 07:44:23 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:01.398 07:44:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:01.398 ************************************ 00:08:01.398 START TEST skip_rpc 00:08:01.398 ************************************ 00:08:01.398 07:44:23 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:08:01.398 07:44:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58269 00:08:01.398 07:44:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:01.398 07:44:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:08:01.398 07:44:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:08:01.656 [2024-11-06 07:44:24.127020] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
00:08:01.656 [2024-11-06 07:44:24.127279] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58269 ] 00:08:01.914 [2024-11-06 07:44:24.323994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.914 [2024-11-06 07:44:24.496217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.228 07:44:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:08:07.228 07:44:28 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:08:07.228 07:44:28 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:08:07.228 07:44:28 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:07.228 07:44:28 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:07.228 07:44:28 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:07.228 07:44:28 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:07.228 07:44:28 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:08:07.228 07:44:28 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.228 07:44:28 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.228 07:44:29 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:07.228 07:44:29 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:08:07.228 07:44:29 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:07.228 07:44:29 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:07.228 07:44:29 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:07.228 07:44:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:08:07.228 07:44:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58269 00:08:07.228 07:44:29 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 58269 ']' 00:08:07.228 07:44:29 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 58269 00:08:07.228 07:44:29 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:08:07.228 07:44:29 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:07.228 07:44:29 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58269 00:08:07.228 07:44:29 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:07.228 07:44:29 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:07.228 killing process with pid 58269 00:08:07.228 07:44:29 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58269' 00:08:07.228 07:44:29 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 58269 00:08:07.228 07:44:29 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 58269 00:08:09.169 00:08:09.169 real 0m7.319s 00:08:09.169 user 0m6.695s 00:08:09.169 sys 0m0.517s 00:08:09.169 07:44:31 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:09.169 07:44:31 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.169 ************************************ 00:08:09.169 END TEST skip_rpc 00:08:09.169 
************************************ 00:08:09.169 07:44:31 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:08:09.169 07:44:31 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:09.169 07:44:31 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:09.169 07:44:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.169 ************************************ 00:08:09.169 START TEST skip_rpc_with_json 00:08:09.169 ************************************ 00:08:09.169 07:44:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:08:09.169 07:44:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:08:09.169 07:44:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58373 00:08:09.169 07:44:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:09.169 07:44:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:09.169 07:44:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58373 00:08:09.169 07:44:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 58373 ']' 00:08:09.169 07:44:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.169 07:44:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:09.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.169 07:44:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.169 07:44:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:09.169 07:44:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:09.169 [2024-11-06 07:44:31.493913] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
00:08:09.169 [2024-11-06 07:44:31.494108] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58373 ] 00:08:09.170 [2024-11-06 07:44:31.680205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.428 [2024-11-06 07:44:31.829840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.402 07:44:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:10.402 07:44:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:08:10.402 07:44:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:08:10.402 07:44:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.402 07:44:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:10.402 [2024-11-06 07:44:32.769863] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:08:10.402 request: 00:08:10.402 { 00:08:10.402 "trtype": "tcp", 00:08:10.402 "method": "nvmf_get_transports", 00:08:10.402 "req_id": 1 00:08:10.402 } 00:08:10.402 Got JSON-RPC error response 00:08:10.402 response: 00:08:10.402 { 00:08:10.402 "code": -19, 00:08:10.402 "message": "No such device" 00:08:10.402 } 00:08:10.402 07:44:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:10.402 07:44:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:08:10.402 07:44:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.402 07:44:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:10.402 [2024-11-06 07:44:32.782053] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:10.402 07:44:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.402 07:44:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:08:10.402 07:44:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.402 07:44:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:10.402 07:44:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.402 07:44:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:10.402 { 00:08:10.402 "subsystems": [ 00:08:10.402 { 00:08:10.402 "subsystem": "fsdev", 00:08:10.402 "config": [ 00:08:10.402 { 00:08:10.402 "method": "fsdev_set_opts", 00:08:10.402 "params": { 00:08:10.402 "fsdev_io_pool_size": 65535, 00:08:10.402 "fsdev_io_cache_size": 256 00:08:10.402 } 00:08:10.402 } 00:08:10.402 ] 00:08:10.402 }, 00:08:10.402 { 00:08:10.402 "subsystem": "keyring", 00:08:10.402 "config": [] 00:08:10.402 }, 00:08:10.402 { 00:08:10.402 "subsystem": "iobuf", 00:08:10.402 "config": [ 00:08:10.402 { 00:08:10.402 "method": "iobuf_set_options", 00:08:10.402 "params": { 00:08:10.402 "small_pool_count": 8192, 00:08:10.402 "large_pool_count": 1024, 00:08:10.402 "small_bufsize": 8192, 00:08:10.402 "large_bufsize": 135168, 00:08:10.402 "enable_numa": false 00:08:10.402 } 00:08:10.402 } 00:08:10.402 ] 00:08:10.402 }, 00:08:10.402 { 00:08:10.402 "subsystem": "sock", 00:08:10.402 "config": [ 00:08:10.402 { 
00:08:10.402 "method": "sock_set_default_impl", 00:08:10.402 "params": { 00:08:10.402 "impl_name": "posix" 00:08:10.402 } 00:08:10.402 }, 00:08:10.402 { 00:08:10.402 "method": "sock_impl_set_options", 00:08:10.402 "params": { 00:08:10.402 "impl_name": "ssl", 00:08:10.402 "recv_buf_size": 4096, 00:08:10.402 "send_buf_size": 4096, 00:08:10.402 "enable_recv_pipe": true, 00:08:10.402 "enable_quickack": false, 00:08:10.402 "enable_placement_id": 0, 00:08:10.402 "enable_zerocopy_send_server": true, 00:08:10.402 "enable_zerocopy_send_client": false, 00:08:10.402 "zerocopy_threshold": 0, 00:08:10.402 "tls_version": 0, 00:08:10.402 "enable_ktls": false 00:08:10.402 } 00:08:10.402 }, 00:08:10.402 { 00:08:10.402 "method": "sock_impl_set_options", 00:08:10.402 "params": { 00:08:10.402 "impl_name": "posix", 00:08:10.402 "recv_buf_size": 2097152, 00:08:10.402 "send_buf_size": 2097152, 00:08:10.402 "enable_recv_pipe": true, 00:08:10.402 "enable_quickack": false, 00:08:10.402 "enable_placement_id": 0, 00:08:10.402 "enable_zerocopy_send_server": true, 00:08:10.402 "enable_zerocopy_send_client": false, 00:08:10.402 "zerocopy_threshold": 0, 00:08:10.402 "tls_version": 0, 00:08:10.402 "enable_ktls": false 00:08:10.402 } 00:08:10.402 } 00:08:10.402 ] 00:08:10.402 }, 00:08:10.402 { 00:08:10.402 "subsystem": "vmd", 00:08:10.402 "config": [] 00:08:10.402 }, 00:08:10.402 { 00:08:10.402 "subsystem": "accel", 00:08:10.402 "config": [ 00:08:10.402 { 00:08:10.402 "method": "accel_set_options", 00:08:10.402 "params": { 00:08:10.402 "small_cache_size": 128, 00:08:10.402 "large_cache_size": 16, 00:08:10.402 "task_count": 2048, 00:08:10.402 "sequence_count": 2048, 00:08:10.402 "buf_count": 2048 00:08:10.402 } 00:08:10.402 } 00:08:10.402 ] 00:08:10.402 }, 00:08:10.402 { 00:08:10.402 "subsystem": "bdev", 00:08:10.402 "config": [ 00:08:10.402 { 00:08:10.402 "method": "bdev_set_options", 00:08:10.402 "params": { 00:08:10.402 "bdev_io_pool_size": 65535, 00:08:10.402 "bdev_io_cache_size": 256, 00:08:10.402 "bdev_auto_examine": true, 00:08:10.402 "iobuf_small_cache_size": 128, 00:08:10.402 "iobuf_large_cache_size": 16 00:08:10.402 } 00:08:10.402 }, 00:08:10.402 { 00:08:10.402 "method": "bdev_raid_set_options", 00:08:10.402 "params": { 00:08:10.402 "process_window_size_kb": 1024, 00:08:10.402 "process_max_bandwidth_mb_sec": 0 00:08:10.402 } 00:08:10.402 }, 00:08:10.402 { 00:08:10.402 "method": "bdev_iscsi_set_options", 00:08:10.402 "params": { 00:08:10.402 "timeout_sec": 30 00:08:10.402 } 00:08:10.402 }, 00:08:10.402 { 00:08:10.402 "method": "bdev_nvme_set_options", 00:08:10.402 "params": { 00:08:10.402 "action_on_timeout": "none", 00:08:10.402 "timeout_us": 0, 00:08:10.402 "timeout_admin_us": 0, 00:08:10.402 "keep_alive_timeout_ms": 10000, 00:08:10.402 "arbitration_burst": 0, 00:08:10.402 "low_priority_weight": 0, 00:08:10.402 "medium_priority_weight": 0, 00:08:10.402 "high_priority_weight": 0, 00:08:10.402 "nvme_adminq_poll_period_us": 10000, 00:08:10.402 "nvme_ioq_poll_period_us": 0, 00:08:10.402 "io_queue_requests": 0, 00:08:10.402 "delay_cmd_submit": true, 00:08:10.402 "transport_retry_count": 4, 00:08:10.402 "bdev_retry_count": 3, 00:08:10.402 "transport_ack_timeout": 0, 00:08:10.402 "ctrlr_loss_timeout_sec": 0, 00:08:10.402 "reconnect_delay_sec": 0, 00:08:10.402 "fast_io_fail_timeout_sec": 0, 00:08:10.402 "disable_auto_failback": false, 00:08:10.402 "generate_uuids": false, 00:08:10.402 "transport_tos": 0, 00:08:10.402 "nvme_error_stat": false, 00:08:10.402 "rdma_srq_size": 0, 00:08:10.402 "io_path_stat": false, 
00:08:10.402 "allow_accel_sequence": false, 00:08:10.402 "rdma_max_cq_size": 0, 00:08:10.402 "rdma_cm_event_timeout_ms": 0, 00:08:10.402 "dhchap_digests": [ 00:08:10.402 "sha256", 00:08:10.402 "sha384", 00:08:10.402 "sha512" 00:08:10.402 ], 00:08:10.402 "dhchap_dhgroups": [ 00:08:10.402 "null", 00:08:10.402 "ffdhe2048", 00:08:10.402 "ffdhe3072", 00:08:10.402 "ffdhe4096", 00:08:10.402 "ffdhe6144", 00:08:10.402 "ffdhe8192" 00:08:10.402 ] 00:08:10.402 } 00:08:10.402 }, 00:08:10.402 { 00:08:10.402 "method": "bdev_nvme_set_hotplug", 00:08:10.402 "params": { 00:08:10.402 "period_us": 100000, 00:08:10.402 "enable": false 00:08:10.402 } 00:08:10.402 }, 00:08:10.402 { 00:08:10.402 "method": "bdev_wait_for_examine" 00:08:10.402 } 00:08:10.402 ] 00:08:10.402 }, 00:08:10.402 { 00:08:10.402 "subsystem": "scsi", 00:08:10.402 "config": null 00:08:10.402 }, 00:08:10.402 { 00:08:10.402 "subsystem": "scheduler", 00:08:10.402 "config": [ 00:08:10.402 { 00:08:10.402 "method": "framework_set_scheduler", 00:08:10.402 "params": { 00:08:10.402 "name": "static" 00:08:10.402 } 00:08:10.402 } 00:08:10.402 ] 00:08:10.402 }, 00:08:10.402 { 00:08:10.402 "subsystem": "vhost_scsi", 00:08:10.402 "config": [] 00:08:10.402 }, 00:08:10.402 { 00:08:10.402 "subsystem": "vhost_blk", 00:08:10.402 "config": [] 00:08:10.402 }, 00:08:10.402 { 00:08:10.402 "subsystem": "ublk", 00:08:10.402 "config": [] 00:08:10.402 }, 00:08:10.402 { 00:08:10.403 "subsystem": "nbd", 00:08:10.403 "config": [] 00:08:10.403 }, 00:08:10.403 { 00:08:10.403 "subsystem": "nvmf", 00:08:10.403 "config": [ 00:08:10.403 { 00:08:10.403 "method": "nvmf_set_config", 00:08:10.403 "params": { 00:08:10.403 "discovery_filter": "match_any", 00:08:10.403 "admin_cmd_passthru": { 00:08:10.403 "identify_ctrlr": false 00:08:10.403 }, 00:08:10.403 "dhchap_digests": [ 00:08:10.403 "sha256", 00:08:10.403 "sha384", 00:08:10.403 "sha512" 00:08:10.403 ], 00:08:10.403 "dhchap_dhgroups": [ 00:08:10.403 "null", 00:08:10.403 "ffdhe2048", 00:08:10.403 "ffdhe3072", 00:08:10.403 "ffdhe4096", 00:08:10.403 "ffdhe6144", 00:08:10.403 "ffdhe8192" 00:08:10.403 ] 00:08:10.403 } 00:08:10.403 }, 00:08:10.403 { 00:08:10.403 "method": "nvmf_set_max_subsystems", 00:08:10.403 "params": { 00:08:10.403 "max_subsystems": 1024 00:08:10.403 } 00:08:10.403 }, 00:08:10.403 { 00:08:10.403 "method": "nvmf_set_crdt", 00:08:10.403 "params": { 00:08:10.403 "crdt1": 0, 00:08:10.403 "crdt2": 0, 00:08:10.403 "crdt3": 0 00:08:10.403 } 00:08:10.403 }, 00:08:10.403 { 00:08:10.403 "method": "nvmf_create_transport", 00:08:10.403 "params": { 00:08:10.403 "trtype": "TCP", 00:08:10.403 "max_queue_depth": 128, 00:08:10.403 "max_io_qpairs_per_ctrlr": 127, 00:08:10.403 "in_capsule_data_size": 4096, 00:08:10.403 "max_io_size": 131072, 00:08:10.403 "io_unit_size": 131072, 00:08:10.403 "max_aq_depth": 128, 00:08:10.403 "num_shared_buffers": 511, 00:08:10.403 "buf_cache_size": 4294967295, 00:08:10.403 "dif_insert_or_strip": false, 00:08:10.403 "zcopy": false, 00:08:10.403 "c2h_success": true, 00:08:10.403 "sock_priority": 0, 00:08:10.403 "abort_timeout_sec": 1, 00:08:10.403 "ack_timeout": 0, 00:08:10.403 "data_wr_pool_size": 0 00:08:10.403 } 00:08:10.403 } 00:08:10.403 ] 00:08:10.403 }, 00:08:10.403 { 00:08:10.403 "subsystem": "iscsi", 00:08:10.403 "config": [ 00:08:10.403 { 00:08:10.403 "method": "iscsi_set_options", 00:08:10.403 "params": { 00:08:10.403 "node_base": "iqn.2016-06.io.spdk", 00:08:10.403 "max_sessions": 128, 00:08:10.403 "max_connections_per_session": 2, 00:08:10.403 "max_queue_depth": 64, 00:08:10.403 
"default_time2wait": 2, 00:08:10.403 "default_time2retain": 20, 00:08:10.403 "first_burst_length": 8192, 00:08:10.403 "immediate_data": true, 00:08:10.403 "allow_duplicated_isid": false, 00:08:10.403 "error_recovery_level": 0, 00:08:10.403 "nop_timeout": 60, 00:08:10.403 "nop_in_interval": 30, 00:08:10.403 "disable_chap": false, 00:08:10.403 "require_chap": false, 00:08:10.403 "mutual_chap": false, 00:08:10.403 "chap_group": 0, 00:08:10.403 "max_large_datain_per_connection": 64, 00:08:10.403 "max_r2t_per_connection": 4, 00:08:10.403 "pdu_pool_size": 36864, 00:08:10.403 "immediate_data_pool_size": 16384, 00:08:10.403 "data_out_pool_size": 2048 00:08:10.403 } 00:08:10.403 } 00:08:10.403 ] 00:08:10.403 } 00:08:10.403 ] 00:08:10.403 } 00:08:10.403 07:44:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:10.403 07:44:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58373 00:08:10.403 07:44:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 58373 ']' 00:08:10.403 07:44:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 58373 00:08:10.403 07:44:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:08:10.403 07:44:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:10.403 07:44:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58373 00:08:10.403 07:44:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:10.403 killing process with pid 58373 00:08:10.403 07:44:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:10.403 07:44:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58373' 00:08:10.403 07:44:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 58373 00:08:10.403 07:44:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 58373 00:08:12.937 07:44:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58429 00:08:12.937 07:44:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:12.937 07:44:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:08:18.217 07:44:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58429 00:08:18.217 07:44:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 58429 ']' 00:08:18.217 07:44:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 58429 00:08:18.217 07:44:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:08:18.217 07:44:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:18.217 07:44:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58429 00:08:18.217 07:44:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:18.217 07:44:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:18.217 killing process with pid 58429 00:08:18.217 07:44:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58429' 00:08:18.217 07:44:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- 
# kill 58429 00:08:18.217 07:44:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 58429 00:08:20.120 07:44:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:20.120 07:44:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:20.120 00:08:20.120 real 0m11.330s 00:08:20.120 user 0m10.686s 00:08:20.120 sys 0m1.117s 00:08:20.120 07:44:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:20.120 ************************************ 00:08:20.120 07:44:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:20.120 END TEST skip_rpc_with_json 00:08:20.120 ************************************ 00:08:20.120 07:44:42 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:08:20.120 07:44:42 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:20.120 07:44:42 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:20.120 07:44:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:20.121 ************************************ 00:08:20.121 START TEST skip_rpc_with_delay 00:08:20.121 ************************************ 00:08:20.121 07:44:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:08:20.121 07:44:42 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:20.121 07:44:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:08:20.121 07:44:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:20.121 07:44:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:20.121 07:44:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:20.121 07:44:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:20.380 07:44:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:20.380 07:44:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:20.380 07:44:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:20.380 07:44:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:20.380 07:44:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:08:20.380 07:44:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:20.380 [2024-11-06 07:44:42.896011] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:08:20.380 07:44:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:08:20.380 07:44:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:20.380 07:44:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:20.380 07:44:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:20.380 00:08:20.380 real 0m0.223s 00:08:20.380 user 0m0.122s 00:08:20.380 sys 0m0.099s 00:08:20.380 07:44:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:20.380 ************************************ 00:08:20.380 END TEST skip_rpc_with_delay 00:08:20.380 07:44:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:08:20.380 ************************************ 00:08:20.642 07:44:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:08:20.642 07:44:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:08:20.642 07:44:43 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:08:20.642 07:44:43 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:20.642 07:44:43 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:20.642 07:44:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:20.642 ************************************ 00:08:20.642 START TEST exit_on_failed_rpc_init 00:08:20.642 ************************************ 00:08:20.642 07:44:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:08:20.642 07:44:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58563 00:08:20.642 07:44:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:20.642 07:44:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58563 00:08:20.642 07:44:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 58563 ']' 00:08:20.642 07:44:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.642 07:44:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:20.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.642 07:44:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.642 07:44:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:20.642 07:44:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:20.642 [2024-11-06 07:44:43.135167] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
00:08:20.642 [2024-11-06 07:44:43.135382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58563 ] 00:08:20.908 [2024-11-06 07:44:43.314198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.908 [2024-11-06 07:44:43.455543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.845 07:44:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:21.845 07:44:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:08:21.845 07:44:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:21.845 07:44:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:21.845 07:44:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:08:21.845 07:44:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:21.845 07:44:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:21.845 07:44:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:21.845 07:44:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:21.845 07:44:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:21.845 07:44:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:21.845 07:44:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:21.845 07:44:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:21.845 07:44:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:08:21.845 07:44:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:22.104 [2024-11-06 07:44:44.508912] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:08:22.104 [2024-11-06 07:44:44.509662] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58586 ] 00:08:22.104 [2024-11-06 07:44:44.705116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.362 [2024-11-06 07:44:44.863221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:22.362 [2024-11-06 07:44:44.863349] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:08:22.362 [2024-11-06 07:44:44.863372] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:08:22.362 [2024-11-06 07:44:44.863392] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:22.621 07:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:08:22.621 07:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:22.621 07:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:08:22.621 07:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:08:22.621 07:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:08:22.621 07:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:22.621 07:44:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:22.621 07:44:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58563 00:08:22.621 07:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 58563 ']' 00:08:22.621 07:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 58563 00:08:22.621 07:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:08:22.621 07:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:22.621 07:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58563 00:08:22.621 07:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:22.621 killing process with pid 58563 00:08:22.621 07:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:22.621 07:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58563' 00:08:22.621 07:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 58563 00:08:22.621 07:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 58563 00:08:25.191 00:08:25.191 real 0m4.512s 00:08:25.191 user 0m4.985s 00:08:25.191 sys 0m0.749s 00:08:25.191 07:44:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:25.191 ************************************ 00:08:25.191 END TEST exit_on_failed_rpc_init 00:08:25.191 ************************************ 00:08:25.191 07:44:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:25.191 07:44:47 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:25.191 00:08:25.191 real 0m23.790s 00:08:25.191 user 0m22.684s 00:08:25.191 sys 0m2.691s 00:08:25.191 07:44:47 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:25.191 07:44:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:25.191 ************************************ 00:08:25.191 END TEST skip_rpc 00:08:25.191 ************************************ 00:08:25.191 07:44:47 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:25.191 07:44:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:25.191 07:44:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:25.191 07:44:47 -- common/autotest_common.sh@10 -- # set +x 00:08:25.191 
************************************ 00:08:25.191 START TEST rpc_client 00:08:25.191 ************************************ 00:08:25.191 07:44:47 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:25.191 * Looking for test storage... 00:08:25.191 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:08:25.191 07:44:47 rpc_client -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:08:25.191 07:44:47 rpc_client -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:08:25.191 07:44:47 rpc_client -- common/autotest_common.sh@1689 -- # lcov --version 00:08:25.191 07:44:47 rpc_client -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:08:25.191 07:44:47 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:25.191 07:44:47 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:25.191 07:44:47 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:25.191 07:44:47 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:08:25.191 07:44:47 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:08:25.191 07:44:47 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:08:25.191 07:44:47 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:08:25.191 07:44:47 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:08:25.191 07:44:47 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:08:25.191 07:44:47 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:08:25.191 07:44:47 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:25.191 07:44:47 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:08:25.191 07:44:47 rpc_client -- scripts/common.sh@345 -- # : 1 00:08:25.191 07:44:47 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:25.191 07:44:47 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:25.191 07:44:47 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:08:25.191 07:44:47 rpc_client -- scripts/common.sh@353 -- # local d=1 00:08:25.191 07:44:47 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:25.191 07:44:47 rpc_client -- scripts/common.sh@355 -- # echo 1 00:08:25.191 07:44:47 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:08:25.191 07:44:47 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:08:25.191 07:44:47 rpc_client -- scripts/common.sh@353 -- # local d=2 00:08:25.191 07:44:47 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:25.191 07:44:47 rpc_client -- scripts/common.sh@355 -- # echo 2 00:08:25.191 07:44:47 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:08:25.191 07:44:47 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:25.191 07:44:47 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:25.191 07:44:47 rpc_client -- scripts/common.sh@368 -- # return 0 00:08:25.191 07:44:47 rpc_client -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:25.191 07:44:47 rpc_client -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:08:25.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.191 --rc genhtml_branch_coverage=1 00:08:25.191 --rc genhtml_function_coverage=1 00:08:25.191 --rc genhtml_legend=1 00:08:25.191 --rc geninfo_all_blocks=1 00:08:25.191 --rc geninfo_unexecuted_blocks=1 00:08:25.191 00:08:25.191 ' 00:08:25.191 07:44:47 rpc_client -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:08:25.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.191 --rc genhtml_branch_coverage=1 00:08:25.191 --rc genhtml_function_coverage=1 00:08:25.191 --rc genhtml_legend=1 00:08:25.191 --rc geninfo_all_blocks=1 00:08:25.191 --rc geninfo_unexecuted_blocks=1 00:08:25.191 00:08:25.191 ' 00:08:25.191 07:44:47 rpc_client -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:08:25.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.191 --rc genhtml_branch_coverage=1 00:08:25.191 --rc genhtml_function_coverage=1 00:08:25.191 --rc genhtml_legend=1 00:08:25.191 --rc geninfo_all_blocks=1 00:08:25.191 --rc geninfo_unexecuted_blocks=1 00:08:25.191 00:08:25.191 ' 00:08:25.191 07:44:47 rpc_client -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:08:25.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.191 --rc genhtml_branch_coverage=1 00:08:25.191 --rc genhtml_function_coverage=1 00:08:25.191 --rc genhtml_legend=1 00:08:25.191 --rc geninfo_all_blocks=1 00:08:25.191 --rc geninfo_unexecuted_blocks=1 00:08:25.191 00:08:25.191 ' 00:08:25.191 07:44:47 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:08:25.450 OK 00:08:25.450 07:44:47 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:08:25.450 00:08:25.450 real 0m0.258s 00:08:25.450 user 0m0.152s 00:08:25.450 sys 0m0.117s 00:08:25.450 07:44:47 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:25.450 07:44:47 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:08:25.450 ************************************ 00:08:25.450 END TEST rpc_client 00:08:25.450 ************************************ 00:08:25.450 07:44:47 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:25.450 07:44:47 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:25.450 07:44:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:25.450 07:44:47 -- common/autotest_common.sh@10 -- # set +x 00:08:25.450 ************************************ 00:08:25.450 START TEST json_config 00:08:25.450 ************************************ 00:08:25.450 07:44:47 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:25.450 07:44:47 json_config -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:08:25.450 07:44:47 json_config -- common/autotest_common.sh@1689 -- # lcov --version 00:08:25.450 07:44:47 json_config -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:08:25.709 07:44:48 json_config -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:08:25.709 07:44:48 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:25.709 07:44:48 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:25.709 07:44:48 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:25.709 07:44:48 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:08:25.709 07:44:48 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:08:25.709 07:44:48 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:08:25.709 07:44:48 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:08:25.709 07:44:48 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:08:25.709 07:44:48 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:08:25.709 07:44:48 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:08:25.709 07:44:48 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:25.709 07:44:48 json_config -- scripts/common.sh@344 -- # case "$op" in 00:08:25.709 07:44:48 json_config -- scripts/common.sh@345 -- # : 1 00:08:25.709 07:44:48 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:25.709 07:44:48 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:25.709 07:44:48 json_config -- scripts/common.sh@365 -- # decimal 1 00:08:25.709 07:44:48 json_config -- scripts/common.sh@353 -- # local d=1 00:08:25.709 07:44:48 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:25.709 07:44:48 json_config -- scripts/common.sh@355 -- # echo 1 00:08:25.709 07:44:48 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:08:25.709 07:44:48 json_config -- scripts/common.sh@366 -- # decimal 2 00:08:25.709 07:44:48 json_config -- scripts/common.sh@353 -- # local d=2 00:08:25.709 07:44:48 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:25.709 07:44:48 json_config -- scripts/common.sh@355 -- # echo 2 00:08:25.709 07:44:48 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:08:25.709 07:44:48 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:25.709 07:44:48 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:25.709 07:44:48 json_config -- scripts/common.sh@368 -- # return 0 00:08:25.709 07:44:48 json_config -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:25.709 07:44:48 json_config -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:08:25.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.709 --rc genhtml_branch_coverage=1 00:08:25.709 --rc genhtml_function_coverage=1 00:08:25.709 --rc genhtml_legend=1 00:08:25.709 --rc geninfo_all_blocks=1 00:08:25.709 --rc geninfo_unexecuted_blocks=1 00:08:25.709 00:08:25.709 ' 00:08:25.709 07:44:48 json_config -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:08:25.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.709 --rc genhtml_branch_coverage=1 00:08:25.709 --rc genhtml_function_coverage=1 00:08:25.709 --rc genhtml_legend=1 00:08:25.709 --rc geninfo_all_blocks=1 00:08:25.709 --rc geninfo_unexecuted_blocks=1 00:08:25.709 00:08:25.709 ' 00:08:25.709 07:44:48 json_config -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:08:25.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.709 --rc genhtml_branch_coverage=1 00:08:25.709 --rc genhtml_function_coverage=1 00:08:25.709 --rc genhtml_legend=1 00:08:25.709 --rc geninfo_all_blocks=1 00:08:25.709 --rc geninfo_unexecuted_blocks=1 00:08:25.709 00:08:25.709 ' 00:08:25.709 07:44:48 json_config -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:08:25.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.709 --rc genhtml_branch_coverage=1 00:08:25.709 --rc genhtml_function_coverage=1 00:08:25.709 --rc genhtml_legend=1 00:08:25.709 --rc geninfo_all_blocks=1 00:08:25.709 --rc geninfo_unexecuted_blocks=1 00:08:25.709 00:08:25.709 ' 00:08:25.709 07:44:48 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:25.709 07:44:48 json_config -- nvmf/common.sh@7 -- # uname -s 00:08:25.709 07:44:48 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:25.709 07:44:48 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:25.709 07:44:48 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:25.709 07:44:48 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:25.709 07:44:48 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:25.709 07:44:48 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:25.709 07:44:48 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:25.709 07:44:48 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:25.709 07:44:48 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:25.709 07:44:48 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:25.709 07:44:48 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8ac525bc-2596-4ce9-9d20-0a718625d8cf 00:08:25.709 07:44:48 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=8ac525bc-2596-4ce9-9d20-0a718625d8cf 00:08:25.709 07:44:48 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:25.709 07:44:48 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:25.709 07:44:48 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:25.709 07:44:48 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:25.709 07:44:48 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:25.709 07:44:48 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:08:25.709 07:44:48 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:25.709 07:44:48 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:25.709 07:44:48 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:25.710 07:44:48 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.710 07:44:48 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.710 07:44:48 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.710 07:44:48 json_config -- paths/export.sh@5 -- # export PATH 00:08:25.710 07:44:48 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.710 07:44:48 json_config -- nvmf/common.sh@51 -- # : 0 00:08:25.710 07:44:48 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:25.710 07:44:48 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:25.710 07:44:48 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:25.710 07:44:48 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:25.710 07:44:48 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:25.710 07:44:48 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:25.710 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:25.710 07:44:48 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:25.710 07:44:48 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:25.710 07:44:48 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:25.710 07:44:48 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:08:25.710 07:44:48 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:08:25.710 07:44:48 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:08:25.710 07:44:48 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:08:25.710 07:44:48 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:08:25.710 07:44:48 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:08:25.710 WARNING: No tests are enabled so not running JSON configuration tests 00:08:25.710 07:44:48 json_config -- json_config/json_config.sh@28 -- # exit 0 00:08:25.710 00:08:25.710 real 0m0.191s 00:08:25.710 user 0m0.116s 00:08:25.710 sys 0m0.072s 00:08:25.710 07:44:48 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:25.710 ************************************ 00:08:25.710 END TEST json_config 00:08:25.710 ************************************ 00:08:25.710 07:44:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:25.710 07:44:48 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:25.710 07:44:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:25.710 07:44:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:25.710 07:44:48 -- common/autotest_common.sh@10 -- # set +x 00:08:25.710 ************************************ 00:08:25.710 START TEST json_config_extra_key 00:08:25.710 ************************************ 00:08:25.710 07:44:48 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:25.710 07:44:48 json_config_extra_key -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:08:25.710 07:44:48 json_config_extra_key -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:08:25.710 07:44:48 json_config_extra_key -- common/autotest_common.sh@1689 -- # lcov --version 00:08:25.970 07:44:48 json_config_extra_key -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:08:25.970 07:44:48 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:25.970 07:44:48 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:25.970 07:44:48 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:25.970 07:44:48 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:08:25.970 07:44:48 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:08:25.970 07:44:48 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:08:25.970 07:44:48 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:08:25.970 07:44:48 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:08:25.970 07:44:48 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:08:25.970 07:44:48 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:08:25.970 07:44:48 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:25.970 07:44:48 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:08:25.970 07:44:48 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:08:25.970 07:44:48 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:25.970 07:44:48 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:25.970 07:44:48 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:08:25.970 07:44:48 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:08:25.970 07:44:48 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:25.970 07:44:48 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:08:25.970 07:44:48 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:08:25.970 07:44:48 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:08:25.970 07:44:48 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:08:25.970 07:44:48 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:25.970 07:44:48 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:08:25.970 07:44:48 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:08:25.970 07:44:48 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:25.970 07:44:48 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:25.970 07:44:48 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:08:25.970 07:44:48 json_config_extra_key -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:25.970 07:44:48 json_config_extra_key -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:08:25.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.970 --rc genhtml_branch_coverage=1 00:08:25.970 --rc genhtml_function_coverage=1 00:08:25.970 --rc genhtml_legend=1 00:08:25.970 --rc geninfo_all_blocks=1 00:08:25.970 --rc geninfo_unexecuted_blocks=1 00:08:25.970 00:08:25.970 ' 00:08:25.970 07:44:48 json_config_extra_key -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:08:25.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.970 --rc genhtml_branch_coverage=1 00:08:25.970 --rc genhtml_function_coverage=1 00:08:25.970 --rc genhtml_legend=1 00:08:25.970 --rc geninfo_all_blocks=1 00:08:25.970 --rc geninfo_unexecuted_blocks=1 00:08:25.970 00:08:25.970 ' 00:08:25.970 07:44:48 json_config_extra_key -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:08:25.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.970 --rc genhtml_branch_coverage=1 00:08:25.970 --rc genhtml_function_coverage=1 00:08:25.970 --rc genhtml_legend=1 00:08:25.970 --rc geninfo_all_blocks=1 00:08:25.970 --rc geninfo_unexecuted_blocks=1 00:08:25.970 00:08:25.970 ' 00:08:25.970 07:44:48 json_config_extra_key -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:08:25.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.970 --rc genhtml_branch_coverage=1 00:08:25.970 --rc 
genhtml_function_coverage=1 00:08:25.970 --rc genhtml_legend=1 00:08:25.970 --rc geninfo_all_blocks=1 00:08:25.970 --rc geninfo_unexecuted_blocks=1 00:08:25.970 00:08:25.970 ' 00:08:25.970 07:44:48 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:25.970 07:44:48 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:08:25.970 07:44:48 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:25.970 07:44:48 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:25.970 07:44:48 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:25.970 07:44:48 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:25.970 07:44:48 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:25.970 07:44:48 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:25.970 07:44:48 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:25.970 07:44:48 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:25.970 07:44:48 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:25.970 07:44:48 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:25.970 07:44:48 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8ac525bc-2596-4ce9-9d20-0a718625d8cf 00:08:25.970 07:44:48 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8ac525bc-2596-4ce9-9d20-0a718625d8cf 00:08:25.971 07:44:48 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:25.971 07:44:48 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:25.971 07:44:48 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:25.971 07:44:48 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:25.971 07:44:48 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:25.971 07:44:48 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:08:25.971 07:44:48 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:25.971 07:44:48 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:25.971 07:44:48 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:25.971 07:44:48 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.971 07:44:48 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.971 07:44:48 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.971 07:44:48 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:08:25.971 07:44:48 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.971 07:44:48 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:08:25.971 07:44:48 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:25.971 07:44:48 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:25.971 07:44:48 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:25.971 07:44:48 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:25.971 07:44:48 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:25.971 07:44:48 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:25.971 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:25.971 07:44:48 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:25.971 07:44:48 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:25.971 07:44:48 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:25.971 07:44:48 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:08:25.971 07:44:48 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:08:25.971 07:44:48 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:08:25.971 07:44:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:08:25.971 07:44:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:08:25.971 07:44:48 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:08:25.971 07:44:48 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:08:25.971 07:44:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:08:25.971 07:44:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:08:25.971 07:44:48 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:25.971 07:44:48 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:08:25.971 INFO: launching applications... 
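Note on the "[: : integer expression expected" warnings in the trace: build_nvmf_app_args in nvmf/common.sh runs a numeric test of the form '[' '' -eq 1 ']' on a variable that is empty in this run, POSIX test cannot treat an empty string as an integer, so bash prints the warning and the branch falls through as false; the test itself proceeds normally. A defensive variant would default the value before comparing (a sketch only; SOME_TEST_FLAG is a hypothetical name, not the variable nvmf/common.sh actually tests):

    SOME_TEST_FLAG=""                          # empty, as in this run
    if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then  # ':-0' guarantees an integer operand
        echo "feature enabled"
    fi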
00:08:25.971 07:44:48 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:25.971 07:44:48 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:08:25.971 07:44:48 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:08:25.971 07:44:48 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:25.971 07:44:48 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:25.971 07:44:48 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:08:25.971 07:44:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:25.971 07:44:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:25.971 07:44:48 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58796 00:08:25.971 07:44:48 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:25.971 Waiting for target to run... 00:08:25.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:25.971 07:44:48 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58796 /var/tmp/spdk_tgt.sock 00:08:25.971 07:44:48 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 58796 ']' 00:08:25.971 07:44:48 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:25.971 07:44:48 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:25.971 07:44:48 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:25.971 07:44:48 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:25.971 07:44:48 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:25.971 07:44:48 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:25.971 [2024-11-06 07:44:48.536649] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:08:25.971 [2024-11-06 07:44:48.536881] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58796 ] 00:08:26.538 [2024-11-06 07:44:49.044451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.797 [2024-11-06 07:44:49.193332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.365 00:08:27.365 INFO: shutting down applications... 00:08:27.365 07:44:49 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:27.365 07:44:49 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:08:27.365 07:44:49 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:08:27.365 07:44:49 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
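The launch above and the shutdown loop below make up the test's target lifecycle: spdk_tgt is started in the background with the core mask, memory size, RPC socket, and JSON config shown, waitforlisten blocks until the RPC socket is usable, and teardown sends SIGINT and then polls kill -0 for up to 30 half-second intervals before giving up. A condensed sketch (the wait loops are assumptions; the real helpers live in json_config/common.sh and autotest_common.sh, and waitforlisten checks more than bare socket existence):

    # Start the target and wait for its RPC socket:
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock --json "$config" &    # $config: extra_key.json here
    app_pid=$!
    i=0
    until [ -S /var/tmp/spdk_tgt.sock ] || (( ++i > 100 )); do sleep 0.1; done

    # Graceful shutdown, mirroring the 30 x 0.5 s loop in the trace:
    kill -SIGINT "$app_pid"
    i=0
    while (( i < 30 )) && kill -0 "$app_pid" 2>/dev/null; do
        (( ++i )); sleep 0.5
    done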
00:08:27.365 07:44:49 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:08:27.365 07:44:49 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:08:27.365 07:44:49 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:27.365 07:44:49 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58796 ]] 00:08:27.365 07:44:49 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58796 00:08:27.365 07:44:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:27.365 07:44:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:27.365 07:44:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58796 00:08:27.365 07:44:49 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:27.931 07:44:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:27.931 07:44:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:27.931 07:44:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58796 00:08:27.931 07:44:50 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:28.498 07:44:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:28.498 07:44:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:28.498 07:44:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58796 00:08:28.498 07:44:50 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:29.066 07:44:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:29.066 07:44:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:29.066 07:44:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58796 00:08:29.066 07:44:51 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:29.636 07:44:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:29.636 07:44:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:29.636 07:44:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58796 00:08:29.636 07:44:51 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:29.895 07:44:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:29.895 07:44:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:29.895 07:44:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58796 00:08:29.895 07:44:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:30.462 07:44:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:30.462 07:44:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:30.462 SPDK target shutdown done 00:08:30.462 Success 00:08:30.462 07:44:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58796 00:08:30.462 07:44:52 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:30.462 07:44:52 json_config_extra_key -- json_config/common.sh@43 -- # break 00:08:30.462 07:44:52 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:30.462 07:44:52 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:30.462 07:44:52 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:08:30.462 ************************************ 00:08:30.462 END TEST json_config_extra_key 00:08:30.462 
************************************ 00:08:30.462 00:08:30.462 real 0m4.799s 00:08:30.462 user 0m4.275s 00:08:30.462 sys 0m0.714s 00:08:30.462 07:44:52 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:30.462 07:44:52 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:30.462 07:44:53 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:30.462 07:44:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:30.462 07:44:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:30.462 07:44:53 -- common/autotest_common.sh@10 -- # set +x 00:08:30.462 ************************************ 00:08:30.462 START TEST alias_rpc 00:08:30.462 ************************************ 00:08:30.462 07:44:53 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:30.721 * Looking for test storage... 00:08:30.721 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:08:30.721 07:44:53 alias_rpc -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:08:30.721 07:44:53 alias_rpc -- common/autotest_common.sh@1689 -- # lcov --version 00:08:30.721 07:44:53 alias_rpc -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:08:30.721 07:44:53 alias_rpc -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:08:30.721 07:44:53 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:30.721 07:44:53 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:30.721 07:44:53 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:30.721 07:44:53 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:30.721 07:44:53 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:30.721 07:44:53 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:30.721 07:44:53 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:30.721 07:44:53 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:30.721 07:44:53 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:30.721 07:44:53 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:30.721 07:44:53 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:30.721 07:44:53 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:30.721 07:44:53 alias_rpc -- scripts/common.sh@345 -- # : 1 00:08:30.721 07:44:53 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:30.721 07:44:53 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:30.721 07:44:53 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:30.721 07:44:53 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:08:30.721 07:44:53 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:30.721 07:44:53 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:08:30.721 07:44:53 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:30.721 07:44:53 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:30.721 07:44:53 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:08:30.721 07:44:53 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:30.721 07:44:53 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:08:30.721 07:44:53 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:30.721 07:44:53 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:30.721 07:44:53 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:30.721 07:44:53 alias_rpc -- scripts/common.sh@368 -- # return 0 00:08:30.721 07:44:53 alias_rpc -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:30.721 07:44:53 alias_rpc -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:08:30.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.721 --rc genhtml_branch_coverage=1 00:08:30.721 --rc genhtml_function_coverage=1 00:08:30.721 --rc genhtml_legend=1 00:08:30.721 --rc geninfo_all_blocks=1 00:08:30.721 --rc geninfo_unexecuted_blocks=1 00:08:30.721 00:08:30.721 ' 00:08:30.721 07:44:53 alias_rpc -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:08:30.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.721 --rc genhtml_branch_coverage=1 00:08:30.721 --rc genhtml_function_coverage=1 00:08:30.721 --rc genhtml_legend=1 00:08:30.721 --rc geninfo_all_blocks=1 00:08:30.721 --rc geninfo_unexecuted_blocks=1 00:08:30.721 00:08:30.721 ' 00:08:30.721 07:44:53 alias_rpc -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:08:30.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.721 --rc genhtml_branch_coverage=1 00:08:30.721 --rc genhtml_function_coverage=1 00:08:30.721 --rc genhtml_legend=1 00:08:30.721 --rc geninfo_all_blocks=1 00:08:30.721 --rc geninfo_unexecuted_blocks=1 00:08:30.721 00:08:30.721 ' 00:08:30.721 07:44:53 alias_rpc -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:08:30.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.721 --rc genhtml_branch_coverage=1 00:08:30.721 --rc genhtml_function_coverage=1 00:08:30.721 --rc genhtml_legend=1 00:08:30.721 --rc geninfo_all_blocks=1 00:08:30.721 --rc geninfo_unexecuted_blocks=1 00:08:30.721 00:08:30.721 ' 00:08:30.721 07:44:53 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:30.721 07:44:53 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58908 00:08:30.721 07:44:53 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58908 00:08:30.721 07:44:53 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:30.721 07:44:53 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 58908 ']' 00:08:30.721 07:44:53 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.721 07:44:53 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:30.721 07:44:53 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:08:30.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.721 07:44:53 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:30.721 07:44:53 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.980 [2024-11-06 07:44:53.351877] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:08:30.980 [2024-11-06 07:44:53.352804] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58908 ] 00:08:30.980 [2024-11-06 07:44:53.548437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.238 [2024-11-06 07:44:53.711491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.174 07:44:54 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:32.174 07:44:54 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:32.174 07:44:54 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:08:32.432 07:44:54 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58908 00:08:32.432 07:44:54 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 58908 ']' 00:08:32.432 07:44:54 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 58908 00:08:32.432 07:44:54 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:08:32.432 07:44:54 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:32.432 07:44:54 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58908 00:08:32.432 killing process with pid 58908 00:08:32.432 07:44:55 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:32.432 07:44:55 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:32.432 07:44:55 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58908' 00:08:32.432 07:44:55 alias_rpc -- common/autotest_common.sh@969 -- # kill 58908 00:08:32.432 07:44:55 alias_rpc -- common/autotest_common.sh@974 -- # wait 58908 00:08:34.965 ************************************ 00:08:34.965 END TEST alias_rpc 00:08:34.965 ************************************ 00:08:34.965 00:08:34.965 real 0m4.378s 00:08:34.965 user 0m4.587s 00:08:34.965 sys 0m0.667s 00:08:34.965 07:44:57 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:34.965 07:44:57 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:34.965 07:44:57 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:08:34.965 07:44:57 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:34.965 07:44:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:34.965 07:44:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:34.965 07:44:57 -- common/autotest_common.sh@10 -- # set +x 00:08:34.965 ************************************ 00:08:34.965 START TEST spdkcli_tcp 00:08:34.965 ************************************ 00:08:34.965 07:44:57 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:34.965 * Looking for test storage... 
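The teardown above runs through killprocess, whose checks are all visible in the trace: validate the pid argument, confirm the process is still alive with kill -0, read its comm name with ps, skip the special handling reserved for sudo-wrapped processes (the name is reactor_0 here, so the plain path is taken), then kill and wait. A compressed sketch; the sudo branch and retry details are elided:

    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                      # still running?
        [ "$(uname)" = Linux ] && \
            process_name=$(ps --no-headers -o comm= "$pid")
        if [ "$process_name" != sudo ]; then            # sudo-wrapped path elided
            echo "killing process with pid $pid"
            kill "$pid"
        fi
        wait "$pid" || true                             # reap the background job
    }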
00:08:34.965 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:08:34.965 07:44:57 spdkcli_tcp -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:08:34.965 07:44:57 spdkcli_tcp -- common/autotest_common.sh@1689 -- # lcov --version 00:08:34.965 07:44:57 spdkcli_tcp -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:08:35.224 07:44:57 spdkcli_tcp -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:08:35.224 07:44:57 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:35.224 07:44:57 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:35.224 07:44:57 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:35.224 07:44:57 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:35.224 07:44:57 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:35.224 07:44:57 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:35.224 07:44:57 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:35.224 07:44:57 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:35.224 07:44:57 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:35.224 07:44:57 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:35.224 07:44:57 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:35.224 07:44:57 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:35.224 07:44:57 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:08:35.224 07:44:57 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:35.224 07:44:57 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:35.224 07:44:57 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:35.224 07:44:57 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:08:35.224 07:44:57 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:35.224 07:44:57 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:08:35.224 07:44:57 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:35.224 07:44:57 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:35.224 07:44:57 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:08:35.224 07:44:57 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:35.224 07:44:57 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:08:35.224 07:44:57 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:35.224 07:44:57 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:35.224 07:44:57 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:35.224 07:44:57 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:08:35.224 07:44:57 spdkcli_tcp -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:35.224 07:44:57 spdkcli_tcp -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:08:35.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.224 --rc genhtml_branch_coverage=1 00:08:35.224 --rc genhtml_function_coverage=1 00:08:35.224 --rc genhtml_legend=1 00:08:35.224 --rc geninfo_all_blocks=1 00:08:35.224 --rc geninfo_unexecuted_blocks=1 00:08:35.224 00:08:35.224 ' 00:08:35.224 07:44:57 spdkcli_tcp -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:08:35.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.224 --rc genhtml_branch_coverage=1 00:08:35.224 --rc genhtml_function_coverage=1 00:08:35.224 --rc genhtml_legend=1 00:08:35.224 --rc geninfo_all_blocks=1 00:08:35.224 --rc geninfo_unexecuted_blocks=1 00:08:35.224 
00:08:35.224 ' 00:08:35.224 07:44:57 spdkcli_tcp -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:08:35.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.224 --rc genhtml_branch_coverage=1 00:08:35.224 --rc genhtml_function_coverage=1 00:08:35.224 --rc genhtml_legend=1 00:08:35.224 --rc geninfo_all_blocks=1 00:08:35.224 --rc geninfo_unexecuted_blocks=1 00:08:35.224 00:08:35.224 ' 00:08:35.224 07:44:57 spdkcli_tcp -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:08:35.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.224 --rc genhtml_branch_coverage=1 00:08:35.224 --rc genhtml_function_coverage=1 00:08:35.224 --rc genhtml_legend=1 00:08:35.224 --rc geninfo_all_blocks=1 00:08:35.224 --rc geninfo_unexecuted_blocks=1 00:08:35.224 00:08:35.224 ' 00:08:35.224 07:44:57 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:08:35.224 07:44:57 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:08:35.224 07:44:57 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:08:35.224 07:44:57 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:08:35.224 07:44:57 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:08:35.224 07:44:57 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:08:35.224 07:44:57 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:08:35.224 07:44:57 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:35.224 07:44:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:35.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.224 07:44:57 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59020 00:08:35.224 07:44:57 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59020 00:08:35.224 07:44:57 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 59020 ']' 00:08:35.224 07:44:57 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:08:35.224 07:44:57 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.224 07:44:57 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:35.224 07:44:57 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.224 07:44:57 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:35.224 07:44:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:35.224 [2024-11-06 07:44:57.794097] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
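Every test section opens with the same scripts/common.sh probe seen again here: lt 1.15 2 compares the installed lcov version against 2 to decide whether the extra --rc lcov_branch_coverage/--rc lcov_function_coverage flags are needed. A simplified reconstruction condensed from the xtrace (field splitting and operator handling are approximations of the real cmp_versions):

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {                       # usage: cmp_versions 1.15 '<' 2
        local -a ver1 ver2
        local op=$2 v max
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$3"
        max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == *'>'* ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == *'<'* ]]; return; }
        done
        [[ $op == *'='* ]]                 # all fields equal: true for ==, <=, >=
    }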
00:08:35.224 [2024-11-06 07:44:57.794345] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59020 ] 00:08:35.483 [2024-11-06 07:44:57.983260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:35.741 [2024-11-06 07:44:58.125323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.741 [2024-11-06 07:44:58.125342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.710 07:44:59 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:36.710 07:44:59 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:08:36.710 07:44:59 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59037 00:08:36.710 07:44:59 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:08:36.710 07:44:59 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:08:36.969 [ 00:08:36.969 "bdev_malloc_delete", 00:08:36.969 "bdev_malloc_create", 00:08:36.969 "bdev_null_resize", 00:08:36.969 "bdev_null_delete", 00:08:36.969 "bdev_null_create", 00:08:36.969 "bdev_nvme_cuse_unregister", 00:08:36.969 "bdev_nvme_cuse_register", 00:08:36.969 "bdev_opal_new_user", 00:08:36.969 "bdev_opal_set_lock_state", 00:08:36.969 "bdev_opal_delete", 00:08:36.969 "bdev_opal_get_info", 00:08:36.969 "bdev_opal_create", 00:08:36.969 "bdev_nvme_opal_revert", 00:08:36.969 "bdev_nvme_opal_init", 00:08:36.969 "bdev_nvme_send_cmd", 00:08:36.969 "bdev_nvme_set_keys", 00:08:36.969 "bdev_nvme_get_path_iostat", 00:08:36.969 "bdev_nvme_get_mdns_discovery_info", 00:08:36.969 "bdev_nvme_stop_mdns_discovery", 00:08:36.969 "bdev_nvme_start_mdns_discovery", 00:08:36.969 "bdev_nvme_set_multipath_policy", 00:08:36.969 "bdev_nvme_set_preferred_path", 00:08:36.969 "bdev_nvme_get_io_paths", 00:08:36.969 "bdev_nvme_remove_error_injection", 00:08:36.969 "bdev_nvme_add_error_injection", 00:08:36.969 "bdev_nvme_get_discovery_info", 00:08:36.969 "bdev_nvme_stop_discovery", 00:08:36.969 "bdev_nvme_start_discovery", 00:08:36.969 "bdev_nvme_get_controller_health_info", 00:08:36.969 "bdev_nvme_disable_controller", 00:08:36.969 "bdev_nvme_enable_controller", 00:08:36.969 "bdev_nvme_reset_controller", 00:08:36.969 "bdev_nvme_get_transport_statistics", 00:08:36.969 "bdev_nvme_apply_firmware", 00:08:36.969 "bdev_nvme_detach_controller", 00:08:36.969 "bdev_nvme_get_controllers", 00:08:36.969 "bdev_nvme_attach_controller", 00:08:36.969 "bdev_nvme_set_hotplug", 00:08:36.969 "bdev_nvme_set_options", 00:08:36.969 "bdev_passthru_delete", 00:08:36.969 "bdev_passthru_create", 00:08:36.969 "bdev_lvol_set_parent_bdev", 00:08:36.969 "bdev_lvol_set_parent", 00:08:36.969 "bdev_lvol_check_shallow_copy", 00:08:36.969 "bdev_lvol_start_shallow_copy", 00:08:36.969 "bdev_lvol_grow_lvstore", 00:08:36.969 "bdev_lvol_get_lvols", 00:08:36.969 "bdev_lvol_get_lvstores", 00:08:36.969 "bdev_lvol_delete", 00:08:36.969 "bdev_lvol_set_read_only", 00:08:36.969 "bdev_lvol_resize", 00:08:36.969 "bdev_lvol_decouple_parent", 00:08:36.969 "bdev_lvol_inflate", 00:08:36.969 "bdev_lvol_rename", 00:08:36.969 "bdev_lvol_clone_bdev", 00:08:36.969 "bdev_lvol_clone", 00:08:36.969 "bdev_lvol_snapshot", 00:08:36.969 "bdev_lvol_create", 00:08:36.969 "bdev_lvol_delete_lvstore", 00:08:36.969 "bdev_lvol_rename_lvstore", 00:08:36.969 
"bdev_lvol_create_lvstore", 00:08:36.969 "bdev_raid_set_options", 00:08:36.969 "bdev_raid_remove_base_bdev", 00:08:36.969 "bdev_raid_add_base_bdev", 00:08:36.969 "bdev_raid_delete", 00:08:36.969 "bdev_raid_create", 00:08:36.969 "bdev_raid_get_bdevs", 00:08:36.969 "bdev_error_inject_error", 00:08:36.969 "bdev_error_delete", 00:08:36.969 "bdev_error_create", 00:08:36.969 "bdev_split_delete", 00:08:36.969 "bdev_split_create", 00:08:36.969 "bdev_delay_delete", 00:08:36.969 "bdev_delay_create", 00:08:36.969 "bdev_delay_update_latency", 00:08:36.969 "bdev_zone_block_delete", 00:08:36.969 "bdev_zone_block_create", 00:08:36.969 "blobfs_create", 00:08:36.969 "blobfs_detect", 00:08:36.969 "blobfs_set_cache_size", 00:08:36.969 "bdev_xnvme_delete", 00:08:36.969 "bdev_xnvme_create", 00:08:36.969 "bdev_aio_delete", 00:08:36.969 "bdev_aio_rescan", 00:08:36.969 "bdev_aio_create", 00:08:36.969 "bdev_ftl_set_property", 00:08:36.969 "bdev_ftl_get_properties", 00:08:36.969 "bdev_ftl_get_stats", 00:08:36.969 "bdev_ftl_unmap", 00:08:36.969 "bdev_ftl_unload", 00:08:36.969 "bdev_ftl_delete", 00:08:36.969 "bdev_ftl_load", 00:08:36.969 "bdev_ftl_create", 00:08:36.969 "bdev_virtio_attach_controller", 00:08:36.969 "bdev_virtio_scsi_get_devices", 00:08:36.969 "bdev_virtio_detach_controller", 00:08:36.969 "bdev_virtio_blk_set_hotplug", 00:08:36.969 "bdev_iscsi_delete", 00:08:36.969 "bdev_iscsi_create", 00:08:36.969 "bdev_iscsi_set_options", 00:08:36.969 "accel_error_inject_error", 00:08:36.969 "ioat_scan_accel_module", 00:08:36.969 "dsa_scan_accel_module", 00:08:36.969 "iaa_scan_accel_module", 00:08:36.969 "keyring_file_remove_key", 00:08:36.969 "keyring_file_add_key", 00:08:36.969 "keyring_linux_set_options", 00:08:36.969 "fsdev_aio_delete", 00:08:36.969 "fsdev_aio_create", 00:08:36.969 "iscsi_get_histogram", 00:08:36.969 "iscsi_enable_histogram", 00:08:36.969 "iscsi_set_options", 00:08:36.969 "iscsi_get_auth_groups", 00:08:36.969 "iscsi_auth_group_remove_secret", 00:08:36.969 "iscsi_auth_group_add_secret", 00:08:36.969 "iscsi_delete_auth_group", 00:08:36.969 "iscsi_create_auth_group", 00:08:36.969 "iscsi_set_discovery_auth", 00:08:36.969 "iscsi_get_options", 00:08:36.969 "iscsi_target_node_request_logout", 00:08:36.969 "iscsi_target_node_set_redirect", 00:08:36.969 "iscsi_target_node_set_auth", 00:08:36.969 "iscsi_target_node_add_lun", 00:08:36.969 "iscsi_get_stats", 00:08:36.969 "iscsi_get_connections", 00:08:36.969 "iscsi_portal_group_set_auth", 00:08:36.969 "iscsi_start_portal_group", 00:08:36.969 "iscsi_delete_portal_group", 00:08:36.969 "iscsi_create_portal_group", 00:08:36.969 "iscsi_get_portal_groups", 00:08:36.969 "iscsi_delete_target_node", 00:08:36.969 "iscsi_target_node_remove_pg_ig_maps", 00:08:36.969 "iscsi_target_node_add_pg_ig_maps", 00:08:36.969 "iscsi_create_target_node", 00:08:36.969 "iscsi_get_target_nodes", 00:08:36.969 "iscsi_delete_initiator_group", 00:08:36.969 "iscsi_initiator_group_remove_initiators", 00:08:36.969 "iscsi_initiator_group_add_initiators", 00:08:36.969 "iscsi_create_initiator_group", 00:08:36.969 "iscsi_get_initiator_groups", 00:08:36.969 "nvmf_set_crdt", 00:08:36.969 "nvmf_set_config", 00:08:36.969 "nvmf_set_max_subsystems", 00:08:36.969 "nvmf_stop_mdns_prr", 00:08:36.969 "nvmf_publish_mdns_prr", 00:08:36.969 "nvmf_subsystem_get_listeners", 00:08:36.969 "nvmf_subsystem_get_qpairs", 00:08:36.969 "nvmf_subsystem_get_controllers", 00:08:36.969 "nvmf_get_stats", 00:08:36.969 "nvmf_get_transports", 00:08:36.969 "nvmf_create_transport", 00:08:36.969 "nvmf_get_targets", 00:08:36.969 
"nvmf_delete_target", 00:08:36.969 "nvmf_create_target", 00:08:36.969 "nvmf_subsystem_allow_any_host", 00:08:36.969 "nvmf_subsystem_set_keys", 00:08:36.969 "nvmf_subsystem_remove_host", 00:08:36.969 "nvmf_subsystem_add_host", 00:08:36.969 "nvmf_ns_remove_host", 00:08:36.969 "nvmf_ns_add_host", 00:08:36.969 "nvmf_subsystem_remove_ns", 00:08:36.969 "nvmf_subsystem_set_ns_ana_group", 00:08:36.969 "nvmf_subsystem_add_ns", 00:08:36.969 "nvmf_subsystem_listener_set_ana_state", 00:08:36.969 "nvmf_discovery_get_referrals", 00:08:36.969 "nvmf_discovery_remove_referral", 00:08:36.969 "nvmf_discovery_add_referral", 00:08:36.969 "nvmf_subsystem_remove_listener", 00:08:36.969 "nvmf_subsystem_add_listener", 00:08:36.969 "nvmf_delete_subsystem", 00:08:36.969 "nvmf_create_subsystem", 00:08:36.969 "nvmf_get_subsystems", 00:08:36.969 "env_dpdk_get_mem_stats", 00:08:36.969 "nbd_get_disks", 00:08:36.969 "nbd_stop_disk", 00:08:36.970 "nbd_start_disk", 00:08:36.970 "ublk_recover_disk", 00:08:36.970 "ublk_get_disks", 00:08:36.970 "ublk_stop_disk", 00:08:36.970 "ublk_start_disk", 00:08:36.970 "ublk_destroy_target", 00:08:36.970 "ublk_create_target", 00:08:36.970 "virtio_blk_create_transport", 00:08:36.970 "virtio_blk_get_transports", 00:08:36.970 "vhost_controller_set_coalescing", 00:08:36.970 "vhost_get_controllers", 00:08:36.970 "vhost_delete_controller", 00:08:36.970 "vhost_create_blk_controller", 00:08:36.970 "vhost_scsi_controller_remove_target", 00:08:36.970 "vhost_scsi_controller_add_target", 00:08:36.970 "vhost_start_scsi_controller", 00:08:36.970 "vhost_create_scsi_controller", 00:08:36.970 "thread_set_cpumask", 00:08:36.970 "scheduler_set_options", 00:08:36.970 "framework_get_governor", 00:08:36.970 "framework_get_scheduler", 00:08:36.970 "framework_set_scheduler", 00:08:36.970 "framework_get_reactors", 00:08:36.970 "thread_get_io_channels", 00:08:36.970 "thread_get_pollers", 00:08:36.970 "thread_get_stats", 00:08:36.970 "framework_monitor_context_switch", 00:08:36.970 "spdk_kill_instance", 00:08:36.970 "log_enable_timestamps", 00:08:36.970 "log_get_flags", 00:08:36.970 "log_clear_flag", 00:08:36.970 "log_set_flag", 00:08:36.970 "log_get_level", 00:08:36.970 "log_set_level", 00:08:36.970 "log_get_print_level", 00:08:36.970 "log_set_print_level", 00:08:36.970 "framework_enable_cpumask_locks", 00:08:36.970 "framework_disable_cpumask_locks", 00:08:36.970 "framework_wait_init", 00:08:36.970 "framework_start_init", 00:08:36.970 "scsi_get_devices", 00:08:36.970 "bdev_get_histogram", 00:08:36.970 "bdev_enable_histogram", 00:08:36.970 "bdev_set_qos_limit", 00:08:36.970 "bdev_set_qd_sampling_period", 00:08:36.970 "bdev_get_bdevs", 00:08:36.970 "bdev_reset_iostat", 00:08:36.970 "bdev_get_iostat", 00:08:36.970 "bdev_examine", 00:08:36.970 "bdev_wait_for_examine", 00:08:36.970 "bdev_set_options", 00:08:36.970 "accel_get_stats", 00:08:36.970 "accel_set_options", 00:08:36.970 "accel_set_driver", 00:08:36.970 "accel_crypto_key_destroy", 00:08:36.970 "accel_crypto_keys_get", 00:08:36.970 "accel_crypto_key_create", 00:08:36.970 "accel_assign_opc", 00:08:36.970 "accel_get_module_info", 00:08:36.970 "accel_get_opc_assignments", 00:08:36.970 "vmd_rescan", 00:08:36.970 "vmd_remove_device", 00:08:36.970 "vmd_enable", 00:08:36.970 "sock_get_default_impl", 00:08:36.970 "sock_set_default_impl", 00:08:36.970 "sock_impl_set_options", 00:08:36.970 "sock_impl_get_options", 00:08:36.970 "iobuf_get_stats", 00:08:36.970 "iobuf_set_options", 00:08:36.970 "keyring_get_keys", 00:08:36.970 "framework_get_pci_devices", 00:08:36.970 
"framework_get_config", 00:08:36.970 "framework_get_subsystems", 00:08:36.970 "fsdev_set_opts", 00:08:36.970 "fsdev_get_opts", 00:08:36.970 "trace_get_info", 00:08:36.970 "trace_get_tpoint_group_mask", 00:08:36.970 "trace_disable_tpoint_group", 00:08:36.970 "trace_enable_tpoint_group", 00:08:36.970 "trace_clear_tpoint_mask", 00:08:36.970 "trace_set_tpoint_mask", 00:08:36.970 "notify_get_notifications", 00:08:36.970 "notify_get_types", 00:08:36.970 "spdk_get_version", 00:08:36.970 "rpc_get_methods" 00:08:36.970 ] 00:08:36.970 07:44:59 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:08:36.970 07:44:59 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:36.970 07:44:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:36.970 07:44:59 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:36.970 07:44:59 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59020 00:08:36.970 07:44:59 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 59020 ']' 00:08:36.970 07:44:59 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 59020 00:08:36.970 07:44:59 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:08:36.970 07:44:59 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:36.970 07:44:59 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59020 00:08:36.970 killing process with pid 59020 00:08:36.970 07:44:59 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:36.970 07:44:59 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:36.970 07:44:59 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59020' 00:08:36.970 07:44:59 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 59020 00:08:36.970 07:44:59 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 59020 00:08:39.508 ************************************ 00:08:39.508 END TEST spdkcli_tcp 00:08:39.508 ************************************ 00:08:39.508 00:08:39.508 real 0m4.368s 00:08:39.508 user 0m8.009s 00:08:39.508 sys 0m0.688s 00:08:39.508 07:45:01 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:39.508 07:45:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:39.508 07:45:01 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:39.508 07:45:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:39.508 07:45:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:39.508 07:45:01 -- common/autotest_common.sh@10 -- # set +x 00:08:39.508 ************************************ 00:08:39.508 START TEST dpdk_mem_utility 00:08:39.508 ************************************ 00:08:39.508 07:45:01 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:39.508 * Looking for test storage... 
00:08:39.508 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:08:39.508 07:45:01 dpdk_mem_utility -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:08:39.508 07:45:01 dpdk_mem_utility -- common/autotest_common.sh@1689 -- # lcov --version 00:08:39.508 07:45:01 dpdk_mem_utility -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:08:39.508 07:45:02 dpdk_mem_utility -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:08:39.509 07:45:02 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:39.509 07:45:02 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:39.509 07:45:02 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:39.509 07:45:02 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:08:39.509 07:45:02 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:08:39.509 07:45:02 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:08:39.509 07:45:02 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:08:39.509 07:45:02 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:08:39.509 07:45:02 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:08:39.509 07:45:02 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:08:39.509 07:45:02 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:39.509 07:45:02 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:08:39.509 07:45:02 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:08:39.509 07:45:02 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:39.509 07:45:02 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:39.509 07:45:02 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:08:39.509 07:45:02 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:08:39.509 07:45:02 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:39.509 07:45:02 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:08:39.509 07:45:02 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:08:39.509 07:45:02 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:08:39.509 07:45:02 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:08:39.509 07:45:02 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:39.509 07:45:02 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:08:39.509 07:45:02 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:08:39.509 07:45:02 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:39.509 07:45:02 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:39.509 07:45:02 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:08:39.509 07:45:02 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:39.509 07:45:02 dpdk_mem_utility -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:08:39.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.509 --rc genhtml_branch_coverage=1 00:08:39.509 --rc genhtml_function_coverage=1 00:08:39.509 --rc genhtml_legend=1 00:08:39.509 --rc geninfo_all_blocks=1 00:08:39.509 --rc geninfo_unexecuted_blocks=1 00:08:39.509 00:08:39.509 ' 00:08:39.509 07:45:02 dpdk_mem_utility -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:08:39.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.509 --rc 
genhtml_branch_coverage=1 00:08:39.509 --rc genhtml_function_coverage=1 00:08:39.509 --rc genhtml_legend=1 00:08:39.509 --rc geninfo_all_blocks=1 00:08:39.509 --rc geninfo_unexecuted_blocks=1 00:08:39.509 00:08:39.509 ' 00:08:39.509 07:45:02 dpdk_mem_utility -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:08:39.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.509 --rc genhtml_branch_coverage=1 00:08:39.509 --rc genhtml_function_coverage=1 00:08:39.509 --rc genhtml_legend=1 00:08:39.509 --rc geninfo_all_blocks=1 00:08:39.509 --rc geninfo_unexecuted_blocks=1 00:08:39.509 00:08:39.509 ' 00:08:39.509 07:45:02 dpdk_mem_utility -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:08:39.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.509 --rc genhtml_branch_coverage=1 00:08:39.509 --rc genhtml_function_coverage=1 00:08:39.509 --rc genhtml_legend=1 00:08:39.509 --rc geninfo_all_blocks=1 00:08:39.509 --rc geninfo_unexecuted_blocks=1 00:08:39.509 00:08:39.509 ' 00:08:39.509 07:45:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:39.509 07:45:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59142 00:08:39.509 07:45:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:39.509 07:45:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59142 00:08:39.509 07:45:02 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 59142 ']' 00:08:39.509 07:45:02 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.509 07:45:02 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:39.509 07:45:02 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.509 07:45:02 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:39.509 07:45:02 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:39.768 [2024-11-06 07:45:02.242163] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
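The memory report that follows is produced in two steps shown in the trace: the env_dpdk_get_mem_stats RPC asks the running target to dump its DPDK allocator state to /tmp/spdk_mem_dump.txt, then scripts/dpdk_mem_info.py parses that file, once for the heap/mempool/memzone summary and once with -m 0 for the element-level detail of heap 0:

    # Run against the live spdk_tgt (paths as in this workspace):
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
    #   -> {"filename": "/tmp/spdk_mem_dump.txt"}
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py          # summary
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0     # heap 0 detail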
00:08:39.768 [2024-11-06 07:45:02.242395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59142 ] 00:08:40.026 [2024-11-06 07:45:02.437759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.026 [2024-11-06 07:45:02.608373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.961 07:45:03 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:40.961 07:45:03 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:08:40.961 07:45:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:08:40.961 07:45:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:08:40.961 07:45:03 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.961 07:45:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:40.961 { 00:08:40.961 "filename": "/tmp/spdk_mem_dump.txt" 00:08:40.961 } 00:08:40.961 07:45:03 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.961 07:45:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:41.221 DPDK memory size 824.000000 MiB in 1 heap(s) 00:08:41.221 1 heaps totaling size 824.000000 MiB 00:08:41.221 size: 824.000000 MiB heap id: 0 00:08:41.221 end heaps---------- 00:08:41.221 9 mempools totaling size 603.782043 MiB 00:08:41.221 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:08:41.222 size: 158.602051 MiB name: PDU_data_out_Pool 00:08:41.222 size: 100.555481 MiB name: bdev_io_59142 00:08:41.222 size: 50.003479 MiB name: msgpool_59142 00:08:41.222 size: 36.509338 MiB name: fsdev_io_59142 00:08:41.222 size: 21.763794 MiB name: PDU_Pool 00:08:41.222 size: 19.513306 MiB name: SCSI_TASK_Pool 00:08:41.222 size: 4.133484 MiB name: evtpool_59142 00:08:41.222 size: 0.026123 MiB name: Session_Pool 00:08:41.222 end mempools------- 00:08:41.222 6 memzones totaling size 4.142822 MiB 00:08:41.222 size: 1.000366 MiB name: RG_ring_0_59142 00:08:41.222 size: 1.000366 MiB name: RG_ring_1_59142 00:08:41.222 size: 1.000366 MiB name: RG_ring_4_59142 00:08:41.222 size: 1.000366 MiB name: RG_ring_5_59142 00:08:41.222 size: 0.125366 MiB name: RG_ring_2_59142 00:08:41.222 size: 0.015991 MiB name: RG_ring_3_59142 00:08:41.222 end memzones------- 00:08:41.222 07:45:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:08:41.222 heap id: 0 total size: 824.000000 MiB number of busy elements: 311 number of free elements: 18 00:08:41.222 list of free elements. 
size: 16.782349 MiB 00:08:41.222 element at address: 0x200006400000 with size: 1.995972 MiB 00:08:41.222 element at address: 0x20000a600000 with size: 1.995972 MiB 00:08:41.222 element at address: 0x200003e00000 with size: 1.991028 MiB 00:08:41.222 element at address: 0x200019500040 with size: 0.999939 MiB 00:08:41.222 element at address: 0x200019900040 with size: 0.999939 MiB 00:08:41.222 element at address: 0x200019a00000 with size: 0.999084 MiB 00:08:41.222 element at address: 0x200032600000 with size: 0.994324 MiB 00:08:41.222 element at address: 0x200000400000 with size: 0.992004 MiB 00:08:41.222 element at address: 0x200019200000 with size: 0.959656 MiB 00:08:41.222 element at address: 0x200019d00040 with size: 0.936401 MiB 00:08:41.222 element at address: 0x200000200000 with size: 0.716980 MiB 00:08:41.222 element at address: 0x20001b400000 with size: 0.563904 MiB 00:08:41.222 element at address: 0x200000c00000 with size: 0.489197 MiB 00:08:41.222 element at address: 0x200019600000 with size: 0.487976 MiB 00:08:41.222 element at address: 0x200019e00000 with size: 0.485413 MiB 00:08:41.222 element at address: 0x200012c00000 with size: 0.433228 MiB 00:08:41.222 element at address: 0x200028800000 with size: 0.390442 MiB 00:08:41.222 element at address: 0x200000800000 with size: 0.350891 MiB 00:08:41.222 list of standard malloc elements. size: 199.286743 MiB 00:08:41.222 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:08:41.222 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:08:41.222 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:08:41.222 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:08:41.222 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:08:41.222 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:08:41.222 element at address: 0x200019deff40 with size: 0.062683 MiB 00:08:41.222 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:08:41.222 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:08:41.222 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:08:41.222 element at address: 0x200012bff040 with size: 0.000305 MiB 00:08:41.222 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:08:41.222 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:08:41.222 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:08:41.222 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:08:41.222 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:08:41.222 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:08:41.222 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:08:41.222 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:08:41.222 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:08:41.222 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:08:41.222 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:08:41.222 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:08:41.222 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:08:41.222 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:08:41.222 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:08:41.222 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:08:41.222 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:08:41.222 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:08:41.222 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:08:41.222 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:08:41.222 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:08:41.222 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:08:41.222 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:08:41.222 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:08:41.222 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:08:41.222 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:08:41.222 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:08:41.222 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:08:41.222 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:08:41.222 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:08:41.222 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:08:41.222 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:08:41.222 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:08:41.222 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:08:41.222 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:08:41.222 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:08:41.222 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:08:41.222 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:08:41.222 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:08:41.222 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:08:41.222 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:08:41.222 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:08:41.222 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:08:41.222 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:08:41.222 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:08:41.222 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:08:41.222 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:08:41.222 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:08:41.222 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:08:41.222 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:08:41.222 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:08:41.222 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:08:41.222 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:08:41.222 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:08:41.222 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:08:41.222 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:08:41.222 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:08:41.222 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:08:41.222 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:08:41.222 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:08:41.222 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:08:41.222 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:08:41.222 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:08:41.222 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:08:41.222 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:08:41.222 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:08:41.222 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:08:41.222 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:08:41.222 element at 
address: 0x200000c7e1c0 with size: 0.000244 MiB 00:08:41.222 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:08:41.222 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:08:41.222 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:08:41.222 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:08:41.222 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:08:41.222 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:08:41.222 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:08:41.222 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:08:41.222 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:08:41.222 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:08:41.222 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:08:41.222 element at address: 0x200000cff000 with size: 0.000244 MiB 00:08:41.222 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:08:41.222 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:08:41.222 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:08:41.222 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:08:41.222 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:08:41.222 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:08:41.222 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:08:41.222 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:08:41.222 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:08:41.222 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:08:41.222 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:08:41.222 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:08:41.222 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:08:41.222 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:08:41.222 element at address: 0x200012bff180 with size: 0.000244 MiB 00:08:41.222 element at address: 0x200012bff280 with size: 0.000244 MiB 00:08:41.222 element at address: 0x200012bff380 with size: 0.000244 MiB 00:08:41.222 element at address: 0x200012bff480 with size: 0.000244 MiB 00:08:41.223 element at address: 0x200012bff580 with size: 0.000244 MiB 00:08:41.223 element at address: 0x200012bff680 with size: 0.000244 MiB 00:08:41.223 element at address: 0x200012bff780 with size: 0.000244 MiB 00:08:41.223 element at address: 0x200012bff880 with size: 0.000244 MiB 00:08:41.223 element at address: 0x200012bff980 with size: 0.000244 MiB 00:08:41.223 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:08:41.223 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:08:41.223 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:08:41.223 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:08:41.223 element at address: 0x200012c6ee80 with size: 0.000244 MiB 00:08:41.223 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:08:41.223 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:08:41.223 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:08:41.223 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:08:41.223 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:08:41.223 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:08:41.223 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:08:41.223 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:08:41.223 element at address: 0x200012c6f780 
with size: 0.000244 MiB 00:08:41.223 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:08:41.223 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:08:41.223 element at address: 0x200019affc40 with size: 0.000244 MiB 00:08:41.223 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b4921c0 with size: 0.000244 MiB 
00:08:41.223 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:08:41.223 element at 
address: 0x20001b4953c0 with size: 0.000244 MiB 00:08:41.223 element at address: 0x200028863f40 with size: 0.000244 MiB 00:08:41.223 element at address: 0x200028864040 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20002886af80 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20002886b080 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20002886b180 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20002886b280 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20002886b380 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20002886b480 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20002886b580 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20002886b680 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20002886b780 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20002886b880 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20002886b980 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20002886be80 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20002886c080 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20002886c180 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20002886c280 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20002886c380 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20002886c480 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20002886c580 with size: 0.000244 MiB 00:08:41.223 element at address: 0x20002886c680 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886c780 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886c880 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886c980 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886d080 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886d180 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886d280 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886d380 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886d480 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886d580 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886d680 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886d780 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886d880 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886d980 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886da80 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886db80 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886dc80 
with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886de80 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886df80 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886e080 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886e180 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886e280 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886e380 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886e480 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886e580 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886e680 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886e780 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886e880 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886e980 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886f080 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886f180 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886f280 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886f380 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886f480 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886f580 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886f680 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886f780 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886f880 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886f980 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:08:41.224 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:08:41.224 list of memzone associated elements. 
size: 607.930908 MiB 00:08:41.224 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:08:41.224 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:08:41.224 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:08:41.224 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:08:41.224 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:08:41.224 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_59142_0 00:08:41.224 element at address: 0x200000dff340 with size: 48.003113 MiB 00:08:41.224 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59142_0 00:08:41.224 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:08:41.224 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59142_0 00:08:41.224 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:08:41.224 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:08:41.224 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:08:41.224 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:08:41.224 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:08:41.224 associated memzone info: size: 3.000122 MiB name: MP_evtpool_59142_0 00:08:41.224 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:08:41.224 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59142 00:08:41.224 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:08:41.224 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59142 00:08:41.224 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:08:41.224 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:08:41.224 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:08:41.224 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:08:41.224 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:08:41.224 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:08:41.224 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:08:41.224 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:08:41.224 element at address: 0x200000cff100 with size: 1.000549 MiB 00:08:41.224 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59142 00:08:41.224 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:08:41.224 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59142 00:08:41.224 element at address: 0x200019affd40 with size: 1.000549 MiB 00:08:41.224 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59142 00:08:41.224 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:08:41.224 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59142 00:08:41.224 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:08:41.224 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59142 00:08:41.224 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:08:41.224 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59142 00:08:41.224 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:08:41.224 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:08:41.224 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:08:41.224 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:08:41.224 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:08:41.224 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:08:41.224 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:08:41.224 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59142 00:08:41.224 element at address: 0x20000085df80 with size: 0.125549 MiB 00:08:41.224 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59142 00:08:41.224 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:08:41.224 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:08:41.224 element at address: 0x200028864140 with size: 0.023804 MiB 00:08:41.224 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:08:41.224 element at address: 0x200000859d40 with size: 0.016174 MiB 00:08:41.224 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59142 00:08:41.224 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:08:41.224 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:08:41.224 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:08:41.224 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59142 00:08:41.224 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:08:41.224 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59142 00:08:41.224 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:08:41.224 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59142 00:08:41.224 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:08:41.224 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:08:41.224 07:45:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:08:41.224 07:45:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59142 00:08:41.224 07:45:03 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 59142 ']' 00:08:41.224 07:45:03 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 59142 00:08:41.224 07:45:03 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:08:41.224 07:45:03 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:41.224 07:45:03 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59142 00:08:41.224 killing process with pid 59142 00:08:41.224 07:45:03 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:41.224 07:45:03 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:41.224 07:45:03 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59142' 00:08:41.224 07:45:03 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 59142 00:08:41.224 07:45:03 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 59142 00:08:43.758 ************************************ 00:08:43.758 END TEST dpdk_mem_utility 00:08:43.758 ************************************ 00:08:43.758 00:08:43.758 real 0m4.263s 00:08:43.758 user 0m4.230s 00:08:43.758 sys 0m0.688s 00:08:43.758 07:45:06 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:43.758 07:45:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:43.758 07:45:06 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:43.758 07:45:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:43.758 07:45:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:43.758 07:45:06 -- common/autotest_common.sh@10 -- # set +x 
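The dump above is the heart of the dpdk_mem_utility test: test_dpdk_mem_info.sh asks the running SPDK app to print its DPDK heap layout (the per-element malloc list followed by the memzone list) and then tears the app down by pid. A minimal sketch of driving the same flow by hand, assuming a running SPDK target and the in-tree rpc.py — the RPC names below exist in SPDK, but treat the exact sequence as illustrative rather than what the script does verbatim:

    # Ask the target to dump DPDK memzone/malloc stats; the RPC replies with
    # the path of the file the stats were written to.
    ./scripts/rpc.py env_dpdk_get_mem_stats
    # Stop the target over RPC instead of the shell-side killprocess used above.
    ./scripts/rpc.py spdk_kill_instance SIGTERM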
00:08:43.758 ************************************ 00:08:43.758 START TEST event 00:08:43.758 ************************************ 00:08:43.758 07:45:06 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:43.758 * Looking for test storage... 00:08:43.758 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:43.758 07:45:06 event -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:08:43.758 07:45:06 event -- common/autotest_common.sh@1689 -- # lcov --version 00:08:43.758 07:45:06 event -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:08:44.017 07:45:06 event -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:08:44.017 07:45:06 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:44.017 07:45:06 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:44.017 07:45:06 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:44.017 07:45:06 event -- scripts/common.sh@336 -- # IFS=.-: 00:08:44.017 07:45:06 event -- scripts/common.sh@336 -- # read -ra ver1 00:08:44.017 07:45:06 event -- scripts/common.sh@337 -- # IFS=.-: 00:08:44.017 07:45:06 event -- scripts/common.sh@337 -- # read -ra ver2 00:08:44.017 07:45:06 event -- scripts/common.sh@338 -- # local 'op=<' 00:08:44.017 07:45:06 event -- scripts/common.sh@340 -- # ver1_l=2 00:08:44.017 07:45:06 event -- scripts/common.sh@341 -- # ver2_l=1 00:08:44.017 07:45:06 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:44.017 07:45:06 event -- scripts/common.sh@344 -- # case "$op" in 00:08:44.017 07:45:06 event -- scripts/common.sh@345 -- # : 1 00:08:44.017 07:45:06 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:44.017 07:45:06 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:44.017 07:45:06 event -- scripts/common.sh@365 -- # decimal 1 00:08:44.017 07:45:06 event -- scripts/common.sh@353 -- # local d=1 00:08:44.017 07:45:06 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:44.017 07:45:06 event -- scripts/common.sh@355 -- # echo 1 00:08:44.017 07:45:06 event -- scripts/common.sh@365 -- # ver1[v]=1 00:08:44.017 07:45:06 event -- scripts/common.sh@366 -- # decimal 2 00:08:44.017 07:45:06 event -- scripts/common.sh@353 -- # local d=2 00:08:44.017 07:45:06 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:44.017 07:45:06 event -- scripts/common.sh@355 -- # echo 2 00:08:44.017 07:45:06 event -- scripts/common.sh@366 -- # ver2[v]=2 00:08:44.017 07:45:06 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:44.017 07:45:06 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:44.017 07:45:06 event -- scripts/common.sh@368 -- # return 0 00:08:44.017 07:45:06 event -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:44.017 07:45:06 event -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:08:44.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.018 --rc genhtml_branch_coverage=1 00:08:44.018 --rc genhtml_function_coverage=1 00:08:44.018 --rc genhtml_legend=1 00:08:44.018 --rc geninfo_all_blocks=1 00:08:44.018 --rc geninfo_unexecuted_blocks=1 00:08:44.018 00:08:44.018 ' 00:08:44.018 07:45:06 event -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:08:44.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.018 --rc genhtml_branch_coverage=1 00:08:44.018 --rc genhtml_function_coverage=1 00:08:44.018 --rc genhtml_legend=1 00:08:44.018 --rc 
geninfo_all_blocks=1 00:08:44.018 --rc geninfo_unexecuted_blocks=1 00:08:44.018 00:08:44.018 ' 00:08:44.018 07:45:06 event -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:08:44.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.018 --rc genhtml_branch_coverage=1 00:08:44.018 --rc genhtml_function_coverage=1 00:08:44.018 --rc genhtml_legend=1 00:08:44.018 --rc geninfo_all_blocks=1 00:08:44.018 --rc geninfo_unexecuted_blocks=1 00:08:44.018 00:08:44.018 ' 00:08:44.018 07:45:06 event -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:08:44.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.018 --rc genhtml_branch_coverage=1 00:08:44.018 --rc genhtml_function_coverage=1 00:08:44.018 --rc genhtml_legend=1 00:08:44.018 --rc geninfo_all_blocks=1 00:08:44.018 --rc geninfo_unexecuted_blocks=1 00:08:44.018 00:08:44.018 ' 00:08:44.018 07:45:06 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:44.018 07:45:06 event -- bdev/nbd_common.sh@6 -- # set -e 00:08:44.018 07:45:06 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:44.018 07:45:06 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:44.018 07:45:06 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:44.018 07:45:06 event -- common/autotest_common.sh@10 -- # set +x 00:08:44.018 ************************************ 00:08:44.018 START TEST event_perf 00:08:44.018 ************************************ 00:08:44.018 07:45:06 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:44.018 Running I/O for 1 seconds...[2024-11-06 07:45:06.477539] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:08:44.018 [2024-11-06 07:45:06.477875] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59250 ] 00:08:44.277 [2024-11-06 07:45:06.669793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:44.277 [2024-11-06 07:45:06.848114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:44.277 Running I/O for 1 seconds...[2024-11-06 07:45:06.848311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:44.277 [2024-11-06 07:45:06.848361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:44.277 [2024-11-06 07:45:06.848352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.653 00:08:45.653 lcore 0: 188204 00:08:45.653 lcore 1: 188204 00:08:45.653 lcore 2: 188205 00:08:45.653 lcore 3: 188204 00:08:45.653 done. 
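The per-lcore counters printed above are event_perf's result: it schedules a continuous stream of events across every reactor in the core mask for the requested duration and reports one total per lcore, and the four nearly identical totals (~188k events each on cores 0-3) indicate the load was spread evenly. A sketch of an equivalent manual run, assuming the same workspace layout as this job (the narrower mask is only an example):

    # Two reactors instead of four, still a 1-second run.
    /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0x3 -t 1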
00:08:45.653 00:08:45.653 real 0m1.695s 00:08:45.653 user 0m4.429s 00:08:45.653 sys 0m0.136s 00:08:45.653 07:45:08 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:45.653 07:45:08 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:08:45.653 ************************************ 00:08:45.653 END TEST event_perf 00:08:45.653 ************************************ 00:08:45.653 07:45:08 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:45.653 07:45:08 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:45.653 07:45:08 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:45.653 07:45:08 event -- common/autotest_common.sh@10 -- # set +x 00:08:45.653 ************************************ 00:08:45.653 START TEST event_reactor 00:08:45.653 ************************************ 00:08:45.653 07:45:08 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:45.653 [2024-11-06 07:45:08.226133] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:08:45.653 [2024-11-06 07:45:08.226585] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59295 ] 00:08:45.914 [2024-11-06 07:45:08.415347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.184 [2024-11-06 07:45:08.556782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.562 test_start 00:08:47.562 oneshot 00:08:47.562 tick 100 00:08:47.562 tick 100 00:08:47.563 tick 250 00:08:47.563 tick 100 00:08:47.563 tick 100 00:08:47.563 tick 250 00:08:47.563 tick 100 00:08:47.563 tick 500 00:08:47.563 tick 100 00:08:47.563 tick 100 00:08:47.563 tick 250 00:08:47.563 tick 100 00:08:47.563 tick 100 00:08:47.563 test_end 00:08:47.563 ************************************ 00:08:47.563 END TEST event_reactor 00:08:47.563 ************************************ 00:08:47.563 00:08:47.563 real 0m1.605s 00:08:47.563 user 0m1.390s 00:08:47.563 sys 0m0.103s 00:08:47.563 07:45:09 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:47.563 07:45:09 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:08:47.563 07:45:09 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:47.563 07:45:09 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:47.563 07:45:09 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:47.563 07:45:09 event -- common/autotest_common.sh@10 -- # set +x 00:08:47.563 ************************************ 00:08:47.563 START TEST event_reactor_perf 00:08:47.563 ************************************ 00:08:47.563 07:45:09 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:47.563 [2024-11-06 07:45:09.889764] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
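The test_start/tick/test_end block just above comes from the single-core reactor test: it fires a one-shot item plus what appear to be recurring timed pollers, with each "tick N" line identifying the poller by its configured number as it runs — the 100 entries firing most often, the 500 entry least. Reproducing it is just the invocation already traced above, shown here as a sketch with the same flag:

    # Single reactor (the -c 0x1 core mask comes from the harness), run for 1 second.
    /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1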
00:08:47.563 [2024-11-06 07:45:09.890144] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59332 ] 00:08:47.563 [2024-11-06 07:45:10.077822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.821 [2024-11-06 07:45:10.215411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.257 test_start 00:08:49.257 test_end 00:08:49.257 Performance: 275424 events per second 00:08:49.257 00:08:49.257 real 0m1.620s 00:08:49.257 user 0m1.402s 00:08:49.257 sys 0m0.105s 00:08:49.257 07:45:11 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:49.257 07:45:11 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:08:49.257 ************************************ 00:08:49.257 END TEST event_reactor_perf 00:08:49.257 ************************************ 00:08:49.257 07:45:11 event -- event/event.sh@49 -- # uname -s 00:08:49.257 07:45:11 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:08:49.257 07:45:11 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:49.257 07:45:11 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:49.257 07:45:11 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:49.257 07:45:11 event -- common/autotest_common.sh@10 -- # set +x 00:08:49.257 ************************************ 00:08:49.257 START TEST event_scheduler 00:08:49.257 ************************************ 00:08:49.257 07:45:11 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:49.257 * Looking for test storage... 
00:08:49.257 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:08:49.257 07:45:11 event.event_scheduler -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:08:49.257 07:45:11 event.event_scheduler -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:08:49.257 07:45:11 event.event_scheduler -- common/autotest_common.sh@1689 -- # lcov --version 00:08:49.257 07:45:11 event.event_scheduler -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:08:49.257 07:45:11 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:49.257 07:45:11 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:49.257 07:45:11 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:49.257 07:45:11 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:08:49.257 07:45:11 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:08:49.257 07:45:11 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:08:49.257 07:45:11 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:08:49.257 07:45:11 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:08:49.257 07:45:11 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:08:49.257 07:45:11 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:08:49.257 07:45:11 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:49.257 07:45:11 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:08:49.257 07:45:11 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:08:49.257 07:45:11 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:49.257 07:45:11 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:49.257 07:45:11 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:08:49.257 07:45:11 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:08:49.257 07:45:11 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:49.257 07:45:11 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:08:49.257 07:45:11 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:08:49.257 07:45:11 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:08:49.257 07:45:11 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:08:49.257 07:45:11 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:49.257 07:45:11 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:08:49.257 07:45:11 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:08:49.257 07:45:11 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:49.257 07:45:11 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:49.257 07:45:11 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:08:49.257 07:45:11 event.event_scheduler -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:49.257 07:45:11 event.event_scheduler -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:08:49.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.257 --rc genhtml_branch_coverage=1 00:08:49.257 --rc genhtml_function_coverage=1 00:08:49.257 --rc genhtml_legend=1 00:08:49.257 --rc geninfo_all_blocks=1 00:08:49.257 --rc geninfo_unexecuted_blocks=1 00:08:49.257 00:08:49.257 ' 00:08:49.257 07:45:11 event.event_scheduler -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:08:49.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.257 --rc genhtml_branch_coverage=1 00:08:49.257 --rc genhtml_function_coverage=1 00:08:49.257 --rc genhtml_legend=1 00:08:49.257 --rc geninfo_all_blocks=1 00:08:49.257 --rc geninfo_unexecuted_blocks=1 00:08:49.257 00:08:49.257 ' 00:08:49.257 07:45:11 event.event_scheduler -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:08:49.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.257 --rc genhtml_branch_coverage=1 00:08:49.257 --rc genhtml_function_coverage=1 00:08:49.257 --rc genhtml_legend=1 00:08:49.257 --rc geninfo_all_blocks=1 00:08:49.257 --rc geninfo_unexecuted_blocks=1 00:08:49.257 00:08:49.257 ' 00:08:49.257 07:45:11 event.event_scheduler -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:08:49.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.257 --rc genhtml_branch_coverage=1 00:08:49.257 --rc genhtml_function_coverage=1 00:08:49.257 --rc genhtml_legend=1 00:08:49.257 --rc geninfo_all_blocks=1 00:08:49.257 --rc geninfo_unexecuted_blocks=1 00:08:49.258 00:08:49.258 ' 00:08:49.258 07:45:11 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:08:49.258 07:45:11 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59408 00:08:49.258 07:45:11 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:08:49.258 07:45:11 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:08:49.258 07:45:11 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59408 00:08:49.258 07:45:11 
event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 59408 ']' 00:08:49.258 07:45:11 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.258 07:45:11 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:49.258 07:45:11 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.258 07:45:11 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:49.258 07:45:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:49.258 [2024-11-06 07:45:11.821991] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:08:49.258 [2024-11-06 07:45:11.822975] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59408 ] 00:08:49.517 [2024-11-06 07:45:12.012813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:49.777 [2024-11-06 07:45:12.178632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.777 [2024-11-06 07:45:12.178756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.777 [2024-11-06 07:45:12.178915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:49.777 [2024-11-06 07:45:12.179165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:50.345 07:45:12 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:50.345 07:45:12 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:08:50.345 07:45:12 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:08:50.345 07:45:12 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.345 07:45:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:50.345 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:50.345 POWER: Cannot set governor of lcore 0 to userspace 00:08:50.345 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:50.345 POWER: Cannot set governor of lcore 0 to performance 00:08:50.345 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:50.345 POWER: Cannot set governor of lcore 0 to userspace 00:08:50.345 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:50.345 POWER: Cannot set governor of lcore 0 to userspace 00:08:50.345 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:08:50.345 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:08:50.345 POWER: Unable to set Power Management Environment for lcore 0 00:08:50.345 [2024-11-06 07:45:12.829481] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:08:50.345 [2024-11-06 07:45:12.829513] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:08:50.345 [2024-11-06 07:45:12.829528] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:08:50.345 [2024-11-06 07:45:12.829553] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:08:50.345 [2024-11-06 07:45:12.829566] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:08:50.345 [2024-11-06 07:45:12.829579] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:08:50.345 07:45:12 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.345 07:45:12 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:08:50.345 07:45:12 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.345 07:45:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:50.604 [2024-11-06 07:45:13.185760] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:08:50.604 07:45:13 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.604 07:45:13 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:08:50.604 07:45:13 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:50.604 07:45:13 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:50.604 07:45:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:50.604 ************************************ 00:08:50.604 START TEST scheduler_create_thread 00:08:50.604 ************************************ 00:08:50.605 07:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:08:50.605 07:45:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:08:50.605 07:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.605 07:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:50.605 2 00:08:50.605 07:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.605 07:45:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:08:50.605 07:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.605 07:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:50.605 3 00:08:50.605 07:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.605 07:45:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:08:50.605 07:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.605 07:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:50.605 4 00:08:50.605 07:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.605 07:45:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:08:50.605 07:45:13 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.605 07:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:50.865 5 00:08:50.865 07:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.865 07:45:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:08:50.865 07:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.865 07:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:50.865 6 00:08:50.865 07:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.865 07:45:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:08:50.865 07:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.865 07:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:50.865 7 00:08:50.865 07:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.865 07:45:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:08:50.866 07:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.866 07:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:50.866 8 00:08:50.866 07:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.866 07:45:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:08:50.866 07:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.866 07:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:50.866 9 00:08:50.866 07:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.866 07:45:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:08:50.866 07:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.866 07:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:50.866 10 00:08:50.866 07:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.866 07:45:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:08:50.866 07:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.866 07:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:50.866 07:45:13 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.866 07:45:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:08:50.866 07:45:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:08:50.866 07:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.866 07:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:50.866 07:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.866 07:45:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:08:50.866 07:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.866 07:45:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:52.243 07:45:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.243 07:45:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:08:52.243 07:45:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:08:52.243 07:45:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.243 07:45:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:53.222 ************************************ 00:08:53.222 END TEST scheduler_create_thread 00:08:53.222 ************************************ 00:08:53.222 07:45:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.222 00:08:53.222 real 0m2.621s 00:08:53.222 user 0m0.016s 00:08:53.222 sys 0m0.010s 00:08:53.222 07:45:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:53.222 07:45:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:53.481 07:45:15 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:08:53.481 07:45:15 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59408 00:08:53.482 07:45:15 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 59408 ']' 00:08:53.482 07:45:15 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 59408 00:08:53.482 07:45:15 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:08:53.482 07:45:15 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:53.482 07:45:15 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59408 00:08:53.482 killing process with pid 59408 00:08:53.482 07:45:15 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:08:53.482 07:45:15 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:08:53.482 07:45:15 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59408' 00:08:53.482 07:45:15 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 59408 00:08:53.482 07:45:15 event.event_scheduler -- 
common/autotest_common.sh@974 -- # wait 59408 00:08:53.741 [2024-11-06 07:45:16.301123] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:08:55.116 00:08:55.116 real 0m5.946s 00:08:55.116 user 0m10.383s 00:08:55.116 sys 0m0.552s 00:08:55.116 ************************************ 00:08:55.116 END TEST event_scheduler 00:08:55.116 ************************************ 00:08:55.116 07:45:17 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:55.116 07:45:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:55.116 07:45:17 event -- event/event.sh@51 -- # modprobe -n nbd 00:08:55.116 07:45:17 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:08:55.116 07:45:17 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:55.116 07:45:17 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:55.116 07:45:17 event -- common/autotest_common.sh@10 -- # set +x 00:08:55.116 ************************************ 00:08:55.116 START TEST app_repeat 00:08:55.116 ************************************ 00:08:55.116 07:45:17 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:08:55.116 07:45:17 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:55.116 07:45:17 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:55.116 07:45:17 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:08:55.116 07:45:17 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:55.116 07:45:17 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:08:55.116 07:45:17 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:08:55.116 07:45:17 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:08:55.116 07:45:17 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59519 00:08:55.116 07:45:17 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:08:55.116 07:45:17 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:08:55.116 Process app_repeat pid: 59519 00:08:55.116 spdk_app_start Round 0 00:08:55.116 07:45:17 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59519' 00:08:55.116 07:45:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:55.116 07:45:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:08:55.116 07:45:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59519 /var/tmp/spdk-nbd.sock 00:08:55.116 07:45:17 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 59519 ']' 00:08:55.116 07:45:17 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:55.116 07:45:17 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:55.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:55.116 07:45:17 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:55.116 07:45:17 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:55.116 07:45:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:55.116 [2024-11-06 07:45:17.572475] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
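One note on the event_scheduler section that finished just above, before the app_repeat run now starting: every step was driven through rpc.py with the test's scheduler_plugin, and the traced calls can be replayed against an app started with --wait-for-rpc. A sketch, assuming PYTHONPATH points at the plugin's directory the way scheduler.sh sets it up, with the thread ids being whatever the create calls happened to return in this run:

    ./scripts/rpc.py framework_set_scheduler dynamic
    ./scripts/rpc.py framework_start_init
    # Create a thread pinned to core 0 that reports 100% active load.
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    # Drop thread 11 to 50% active, then delete thread 12 (ids from earlier create calls).
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12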
00:08:55.117 [2024-11-06 07:45:17.572802] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59519 ] 00:08:55.376 [2024-11-06 07:45:17.748351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:55.376 [2024-11-06 07:45:17.886626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.376 [2024-11-06 07:45:17.886630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:56.315 07:45:18 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:56.315 07:45:18 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:08:56.315 07:45:18 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:56.573 Malloc0 00:08:56.573 07:45:19 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:56.831 Malloc1 00:08:57.100 07:45:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:57.100 07:45:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:57.100 07:45:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:57.100 07:45:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:57.100 07:45:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:57.100 07:45:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:57.100 07:45:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:57.100 07:45:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:57.100 07:45:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:57.100 07:45:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:57.100 07:45:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:57.100 07:45:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:57.100 07:45:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:57.100 07:45:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:57.100 07:45:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:57.100 07:45:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:57.361 /dev/nbd0 00:08:57.361 07:45:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:57.361 07:45:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:57.361 07:45:19 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:57.361 07:45:19 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:57.361 07:45:19 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:57.361 07:45:19 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:57.361 07:45:19 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:57.361 07:45:19 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:08:57.361 07:45:19 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:57.361 07:45:19 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:57.361 07:45:19 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:57.361 1+0 records in 00:08:57.361 1+0 records out 00:08:57.361 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000805832 s, 5.1 MB/s 00:08:57.361 07:45:19 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:57.361 07:45:19 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:57.361 07:45:19 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:57.361 07:45:19 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:57.361 07:45:19 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:57.361 07:45:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:57.361 07:45:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:57.361 07:45:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:57.619 /dev/nbd1 00:08:57.619 07:45:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:57.619 07:45:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:57.619 07:45:20 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:57.619 07:45:20 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:57.619 07:45:20 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:57.619 07:45:20 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:57.619 07:45:20 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:57.619 07:45:20 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:57.619 07:45:20 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:57.619 07:45:20 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:57.619 07:45:20 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:57.619 1+0 records in 00:08:57.619 1+0 records out 00:08:57.619 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000439915 s, 9.3 MB/s 00:08:57.619 07:45:20 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:57.619 07:45:20 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:57.619 07:45:20 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:57.619 07:45:20 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:57.619 07:45:20 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:57.619 07:45:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:57.619 07:45:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:57.619 07:45:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:57.619 07:45:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:57.619 
07:45:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:58.244 07:45:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:58.244 { 00:08:58.244 "nbd_device": "/dev/nbd0", 00:08:58.244 "bdev_name": "Malloc0" 00:08:58.244 }, 00:08:58.244 { 00:08:58.244 "nbd_device": "/dev/nbd1", 00:08:58.244 "bdev_name": "Malloc1" 00:08:58.244 } 00:08:58.244 ]' 00:08:58.244 07:45:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:58.244 { 00:08:58.244 "nbd_device": "/dev/nbd0", 00:08:58.244 "bdev_name": "Malloc0" 00:08:58.244 }, 00:08:58.244 { 00:08:58.244 "nbd_device": "/dev/nbd1", 00:08:58.244 "bdev_name": "Malloc1" 00:08:58.244 } 00:08:58.244 ]' 00:08:58.244 07:45:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:58.244 07:45:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:58.244 /dev/nbd1' 00:08:58.244 07:45:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:58.244 07:45:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:58.244 /dev/nbd1' 00:08:58.244 07:45:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:58.244 07:45:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:58.244 07:45:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:58.244 07:45:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:58.244 07:45:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:58.244 07:45:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:58.244 07:45:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:58.244 07:45:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:58.244 07:45:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:58.244 07:45:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:58.244 07:45:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:58.244 256+0 records in 00:08:58.244 256+0 records out 00:08:58.244 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104376 s, 100 MB/s 00:08:58.244 07:45:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:58.244 07:45:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:58.244 256+0 records in 00:08:58.244 256+0 records out 00:08:58.244 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023824 s, 44.0 MB/s 00:08:58.244 07:45:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:58.244 07:45:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:58.244 256+0 records in 00:08:58.244 256+0 records out 00:08:58.244 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.037163 s, 28.2 MB/s 00:08:58.244 07:45:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:58.244 07:45:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:58.244 07:45:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:58.244 07:45:20 event.app_repeat 
-- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:58.244 07:45:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:58.244 07:45:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:58.244 07:45:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:58.244 07:45:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:58.244 07:45:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:58.244 07:45:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:58.244 07:45:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:58.247 07:45:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:58.247 07:45:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:58.247 07:45:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:58.247 07:45:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:58.247 07:45:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:58.247 07:45:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:58.247 07:45:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:58.247 07:45:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:58.506 07:45:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:58.506 07:45:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:58.506 07:45:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:58.506 07:45:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:58.506 07:45:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:58.506 07:45:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:58.507 07:45:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:58.507 07:45:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:58.507 07:45:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:58.507 07:45:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:58.765 07:45:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:58.765 07:45:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:58.765 07:45:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:59.023 07:45:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:59.023 07:45:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:59.023 07:45:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:59.023 07:45:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:59.023 07:45:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:59.023 07:45:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:59.023 07:45:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:08:59.023 07:45:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:59.282 07:45:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:59.282 07:45:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:59.282 07:45:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:59.282 07:45:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:59.282 07:45:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:59.282 07:45:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:59.282 07:45:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:59.282 07:45:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:59.282 07:45:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:59.283 07:45:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:59.283 07:45:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:59.283 07:45:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:59.283 07:45:21 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:59.848 07:45:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:00.784 [2024-11-06 07:45:23.337898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:01.043 [2024-11-06 07:45:23.467995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.043 [2024-11-06 07:45:23.468005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.043 [2024-11-06 07:45:23.662559] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:01.043 [2024-11-06 07:45:23.662649] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:02.944 07:45:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:02.944 spdk_app_start Round 1 00:09:02.944 07:45:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:02.944 07:45:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59519 /var/tmp/spdk-nbd.sock 00:09:02.944 07:45:25 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 59519 ']' 00:09:02.944 07:45:25 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:02.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:02.944 07:45:25 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:02.944 07:45:25 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
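[Annotation] Round 0 above has completed one full create/attach/verify/teardown cycle, and Rounds 1 and 2 below repeat it verbatim. The shape of the driver loop, reconstructed from the event.sh line numbers visible in the trace (event.sh@23 through @42); option parsing and error paths are trimmed, so treat this as a sketch rather than the exact script.

    app_repeat_test() {
        local rpc_server=/var/tmp/spdk-nbd.sock
        local nbd_list=('/dev/nbd0' '/dev/nbd1')
        local bdev_list=('Malloc0' 'Malloc1')
        local repeat_pid i

        modprobe nbd
        test/event/app_repeat/app_repeat -r "$rpc_server" -m 0x3 -t 4 &
        repeat_pid=$!
        trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT

        for i in {0..2}; do                                   # event.sh@23
            echo "spdk_app_start Round $i"
            waitforlisten "$repeat_pid" "$rpc_server"
            scripts/rpc.py -s "$rpc_server" bdev_malloc_create 64 4096   # Malloc0
            scripts/rpc.py -s "$rpc_server" bdev_malloc_create 64 4096   # Malloc1
            nbd_rpc_data_verify "$rpc_server" "${bdev_list[*]}" "${nbd_list[*]}"
            # -t 4 lets the app restart itself after SIGTERM, so the same
            # pid answers the next round's waitforlisten     # event.sh@34
            scripts/rpc.py -s "$rpc_server" spdk_kill_instance SIGTERM
            sleep 3                                           # event.sh@35
        done

        waitforlisten "$repeat_pid" "$rpc_server"             # event.sh@38
        killprocess "$repeat_pid"                             # event.sh@39
        trap - SIGINT SIGTERM EXIT                            # event.sh@40
    }

This is why the log shows four "spdk_app_start is called in Round N" messages later: three loop iterations plus the final waitforlisten/kill at the end.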
00:09:02.944 07:45:25 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:02.944 07:45:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:02.944 07:45:25 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:02.944 07:45:25 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:09:02.944 07:45:25 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:03.510 Malloc0 00:09:03.510 07:45:25 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:03.768 Malloc1 00:09:03.768 07:45:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:03.768 07:45:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:03.768 07:45:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:03.768 07:45:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:03.768 07:45:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:03.768 07:45:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:03.768 07:45:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:03.768 07:45:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:03.768 07:45:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:03.768 07:45:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:03.768 07:45:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:03.768 07:45:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:03.768 07:45:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:03.768 07:45:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:03.768 07:45:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:03.768 07:45:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:04.027 /dev/nbd0 00:09:04.027 07:45:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:04.027 07:45:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:04.027 07:45:26 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:09:04.027 07:45:26 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:09:04.027 07:45:26 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:04.027 07:45:26 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:04.027 07:45:26 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:09:04.027 07:45:26 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:09:04.027 07:45:26 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:04.027 07:45:26 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:04.027 07:45:26 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:04.027 1+0 records in 00:09:04.027 1+0 records out 
00:09:04.027 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271674 s, 15.1 MB/s 00:09:04.027 07:45:26 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:04.027 07:45:26 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:09:04.027 07:45:26 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:04.027 07:45:26 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:04.027 07:45:26 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:09:04.027 07:45:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:04.027 07:45:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:04.027 07:45:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:04.595 /dev/nbd1 00:09:04.595 07:45:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:04.595 07:45:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:04.595 07:45:26 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:09:04.595 07:45:26 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:09:04.595 07:45:26 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:04.595 07:45:26 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:04.595 07:45:26 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:09:04.595 07:45:26 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:09:04.595 07:45:26 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:04.595 07:45:26 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:04.595 07:45:26 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:04.595 1+0 records in 00:09:04.595 1+0 records out 00:09:04.595 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404145 s, 10.1 MB/s 00:09:04.595 07:45:27 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:04.596 07:45:27 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:09:04.596 07:45:27 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:04.596 07:45:27 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:04.596 07:45:27 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:09:04.596 07:45:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:04.596 07:45:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:04.596 07:45:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:04.596 07:45:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:04.596 07:45:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:04.855 07:45:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:04.855 { 00:09:04.855 "nbd_device": "/dev/nbd0", 00:09:04.855 "bdev_name": "Malloc0" 00:09:04.855 }, 00:09:04.855 { 00:09:04.855 "nbd_device": "/dev/nbd1", 00:09:04.855 "bdev_name": "Malloc1" 00:09:04.855 } 
00:09:04.855 ]' 00:09:04.855 07:45:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:04.855 { 00:09:04.855 "nbd_device": "/dev/nbd0", 00:09:04.855 "bdev_name": "Malloc0" 00:09:04.855 }, 00:09:04.855 { 00:09:04.855 "nbd_device": "/dev/nbd1", 00:09:04.855 "bdev_name": "Malloc1" 00:09:04.855 } 00:09:04.855 ]' 00:09:04.855 07:45:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:04.855 07:45:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:04.855 /dev/nbd1' 00:09:04.855 07:45:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:04.855 07:45:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:04.855 /dev/nbd1' 00:09:04.855 07:45:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:04.855 07:45:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:04.855 07:45:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:04.855 07:45:27 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:04.855 07:45:27 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:04.855 07:45:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:04.855 07:45:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:04.855 07:45:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:04.855 07:45:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:04.855 07:45:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:04.855 07:45:27 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:04.855 256+0 records in 00:09:04.855 256+0 records out 00:09:04.855 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0111033 s, 94.4 MB/s 00:09:04.855 07:45:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:04.855 07:45:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:04.855 256+0 records in 00:09:04.855 256+0 records out 00:09:04.855 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0301501 s, 34.8 MB/s 00:09:04.855 07:45:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:04.855 07:45:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:04.855 256+0 records in 00:09:04.855 256+0 records out 00:09:04.855 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0322147 s, 32.5 MB/s 00:09:04.855 07:45:27 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:04.855 07:45:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:04.855 07:45:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:04.855 07:45:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:04.855 07:45:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:04.855 07:45:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:04.855 07:45:27 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:04.855 07:45:27 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:04.855 07:45:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:04.856 07:45:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:04.856 07:45:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:04.856 07:45:27 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:05.113 07:45:27 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:05.113 07:45:27 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:05.113 07:45:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:05.113 07:45:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:05.113 07:45:27 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:05.113 07:45:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:05.113 07:45:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:05.402 07:45:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:05.402 07:45:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:05.402 07:45:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:05.402 07:45:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:05.402 07:45:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:05.402 07:45:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:05.402 07:45:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:05.402 07:45:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:05.402 07:45:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:05.402 07:45:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:05.969 07:45:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:05.969 07:45:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:05.969 07:45:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:05.969 07:45:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:05.969 07:45:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:05.969 07:45:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:05.969 07:45:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:05.969 07:45:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:05.969 07:45:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:05.969 07:45:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:05.969 07:45:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:06.227 07:45:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:06.227 07:45:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:06.227 07:45:28 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:09:06.227 07:45:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:06.227 07:45:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:06.227 07:45:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:06.227 07:45:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:06.227 07:45:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:06.227 07:45:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:06.227 07:45:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:06.227 07:45:28 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:06.227 07:45:28 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:06.227 07:45:28 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:06.794 07:45:29 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:07.730 [2024-11-06 07:45:30.245877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:07.988 [2024-11-06 07:45:30.374397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.988 [2024-11-06 07:45:30.374407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:07.988 [2024-11-06 07:45:30.568741] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:07.988 [2024-11-06 07:45:30.568879] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:09.887 spdk_app_start Round 2 00:09:09.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:09.887 07:45:32 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:09.887 07:45:32 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:09.887 07:45:32 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59519 /var/tmp/spdk-nbd.sock 00:09:09.887 07:45:32 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 59519 ']' 00:09:09.887 07:45:32 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:09.887 07:45:32 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:09.887 07:45:32 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
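[Annotation] Each round's data check is the same three-step dd/cmp sequence that fills the trace: seed a 1 MiB pattern file from /dev/urandom, write it to every nbd device with O_DIRECT, then compare it back byte for byte. Condensed from the nbd_common.sh xtrace above (the tmp path is shortened here); this covers only the dd/cmp stage, not the surrounding nbd_start_disks/nbd_get_count plumbing.

    nbd_dd_data_verify() {
        local nbd_list=('/dev/nbd0' '/dev/nbd1')
        local operation=$1
        local tmp_file=/tmp/nbdrandtest   # trace uses test/event/nbdrandtest
        local i

        if [[ $operation == write ]]; then
            dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
            for i in "${nbd_list[@]}"; do
                # O_DIRECT bypasses the page cache so the data really hits the bdev
                dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
            done
        elif [[ $operation == verify ]]; then
            for i in "${nbd_list[@]}"; do
                # byte-wise compare of the first 1 MiB against the pattern file
                cmp -b -n 1M "$tmp_file" "$i"
            done
            rm "$tmp_file"
        fi
    }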
00:09:09.887 07:45:32 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:09.887 07:45:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:10.145 07:45:32 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:10.145 07:45:32 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:09:10.145 07:45:32 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:10.403 Malloc0 00:09:10.403 07:45:32 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:10.662 Malloc1 00:09:10.662 07:45:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:10.662 07:45:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:10.662 07:45:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:10.662 07:45:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:10.662 07:45:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:10.662 07:45:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:10.662 07:45:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:10.662 07:45:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:10.662 07:45:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:10.662 07:45:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:10.662 07:45:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:10.662 07:45:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:10.662 07:45:33 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:10.662 07:45:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:10.662 07:45:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:10.662 07:45:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:11.239 /dev/nbd0 00:09:11.239 07:45:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:11.239 07:45:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:11.239 07:45:33 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:09:11.239 07:45:33 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:09:11.239 07:45:33 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:11.239 07:45:33 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:11.239 07:45:33 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:09:11.239 07:45:33 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:09:11.239 07:45:33 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:11.239 07:45:33 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:11.239 07:45:33 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:11.239 1+0 records in 00:09:11.239 1+0 records out 
00:09:11.239 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000656318 s, 6.2 MB/s 00:09:11.239 07:45:33 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:11.239 07:45:33 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:09:11.239 07:45:33 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:11.239 07:45:33 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:11.239 07:45:33 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:09:11.239 07:45:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:11.239 07:45:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:11.239 07:45:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:11.498 /dev/nbd1 00:09:11.498 07:45:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:11.498 07:45:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:11.498 07:45:33 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:09:11.498 07:45:33 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:09:11.498 07:45:33 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:11.498 07:45:33 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:11.498 07:45:33 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:09:11.498 07:45:33 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:09:11.498 07:45:33 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:11.498 07:45:33 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:11.498 07:45:33 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:11.498 1+0 records in 00:09:11.498 1+0 records out 00:09:11.498 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386664 s, 10.6 MB/s 00:09:11.498 07:45:33 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:11.498 07:45:33 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:09:11.498 07:45:33 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:11.498 07:45:33 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:11.498 07:45:33 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:09:11.498 07:45:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:11.498 07:45:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:11.498 07:45:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:11.498 07:45:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:11.498 07:45:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:11.757 07:45:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:11.757 { 00:09:11.757 "nbd_device": "/dev/nbd0", 00:09:11.757 "bdev_name": "Malloc0" 00:09:11.757 }, 00:09:11.757 { 00:09:11.757 "nbd_device": "/dev/nbd1", 00:09:11.757 "bdev_name": "Malloc1" 00:09:11.757 } 
00:09:11.757 ]' 00:09:11.757 07:45:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:11.757 { 00:09:11.757 "nbd_device": "/dev/nbd0", 00:09:11.757 "bdev_name": "Malloc0" 00:09:11.757 }, 00:09:11.757 { 00:09:11.757 "nbd_device": "/dev/nbd1", 00:09:11.757 "bdev_name": "Malloc1" 00:09:11.757 } 00:09:11.757 ]' 00:09:11.757 07:45:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:11.757 07:45:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:11.757 /dev/nbd1' 00:09:11.757 07:45:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:11.757 /dev/nbd1' 00:09:11.757 07:45:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:11.757 07:45:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:11.757 07:45:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:11.757 07:45:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:11.757 07:45:34 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:11.757 07:45:34 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:11.757 07:45:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:11.757 07:45:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:11.757 07:45:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:11.757 07:45:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:11.757 07:45:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:11.757 07:45:34 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:12.015 256+0 records in 00:09:12.015 256+0 records out 00:09:12.015 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00879145 s, 119 MB/s 00:09:12.015 07:45:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:12.015 07:45:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:12.015 256+0 records in 00:09:12.015 256+0 records out 00:09:12.015 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.030836 s, 34.0 MB/s 00:09:12.015 07:45:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:12.015 07:45:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:12.015 256+0 records in 00:09:12.015 256+0 records out 00:09:12.015 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0343155 s, 30.6 MB/s 00:09:12.015 07:45:34 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:12.015 07:45:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:12.015 07:45:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:12.015 07:45:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:12.015 07:45:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:12.015 07:45:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:12.015 07:45:34 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:12.015 07:45:34 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:09:12.015 07:45:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:12.015 07:45:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:12.015 07:45:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:12.015 07:45:34 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:12.015 07:45:34 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:12.015 07:45:34 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:12.015 07:45:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:12.015 07:45:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:12.015 07:45:34 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:12.015 07:45:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:12.015 07:45:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:12.287 07:45:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:12.287 07:45:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:12.287 07:45:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:12.287 07:45:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:12.287 07:45:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:12.287 07:45:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:12.287 07:45:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:12.287 07:45:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:12.287 07:45:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:12.287 07:45:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:12.546 07:45:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:12.546 07:45:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:12.546 07:45:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:12.546 07:45:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:12.546 07:45:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:12.546 07:45:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:12.546 07:45:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:12.546 07:45:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:12.546 07:45:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:12.546 07:45:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:12.546 07:45:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:12.805 07:45:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:12.805 07:45:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:12.805 07:45:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 00:09:13.064 07:45:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:13.064 07:45:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:13.064 07:45:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:13.064 07:45:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:13.064 07:45:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:13.064 07:45:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:13.064 07:45:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:13.064 07:45:35 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:13.064 07:45:35 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:13.064 07:45:35 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:13.323 07:45:35 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:14.699 [2024-11-06 07:45:37.007342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:14.699 [2024-11-06 07:45:37.149420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.699 [2024-11-06 07:45:37.149431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.957 [2024-11-06 07:45:37.343596] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:14.957 [2024-11-06 07:45:37.343694] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:16.333 07:45:38 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59519 /var/tmp/spdk-nbd.sock 00:09:16.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:16.333 07:45:38 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 59519 ']' 00:09:16.333 07:45:38 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:16.333 07:45:38 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:16.333 07:45:38 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
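[Annotation] Every nbd_start_disk/nbd_stop_disk in the trace is bracketed by a poll of /proc/partitions, and attach additionally reads one 4 KiB block back with O_DIRECT as an I/O sanity check. Paraphrased from the autotest_common.sh lines in the xtrace; the sleep interval between retries is an assumption.

    waitfornbd() {
        local nbd_name=$1 i size

        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed retry delay
        done

        # read a single block back with O_DIRECT; a zero-sized result means
        # the device node appeared but cannot actually serve I/O yet
        for ((i = 1; i <= 20; i++)); do
            dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [[ $size != 0 ]] && return 0
            sleep 0.1
        done
        return 1
    }

    waitfornbd_exit() {
        local nbd_name=$1 i

        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1
        done
    }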
00:09:16.333 07:45:38 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:16.333 07:45:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:16.898 07:45:39 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:16.898 07:45:39 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:09:16.898 07:45:39 event.app_repeat -- event/event.sh@39 -- # killprocess 59519 00:09:16.899 07:45:39 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 59519 ']' 00:09:16.899 07:45:39 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 59519 00:09:16.899 07:45:39 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:09:16.899 07:45:39 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:16.899 07:45:39 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59519 00:09:16.899 killing process with pid 59519 00:09:16.899 07:45:39 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:16.899 07:45:39 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:16.899 07:45:39 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59519' 00:09:16.899 07:45:39 event.app_repeat -- common/autotest_common.sh@969 -- # kill 59519 00:09:16.899 07:45:39 event.app_repeat -- common/autotest_common.sh@974 -- # wait 59519 00:09:17.836 spdk_app_start is called in Round 0. 00:09:17.836 Shutdown signal received, stop current app iteration 00:09:17.836 Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 reinitialization... 00:09:17.836 spdk_app_start is called in Round 1. 00:09:17.836 Shutdown signal received, stop current app iteration 00:09:17.836 Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 reinitialization... 00:09:17.836 spdk_app_start is called in Round 2. 00:09:17.836 Shutdown signal received, stop current app iteration 00:09:17.836 Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 reinitialization... 00:09:17.836 spdk_app_start is called in Round 3. 00:09:17.836 Shutdown signal received, stop current app iteration 00:09:17.836 07:45:40 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:17.836 07:45:40 event.app_repeat -- event/event.sh@42 -- # return 0 00:09:17.836 00:09:17.836 real 0m22.852s 00:09:17.836 user 0m51.123s 00:09:17.836 sys 0m3.267s 00:09:17.836 07:45:40 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:17.836 07:45:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:17.836 ************************************ 00:09:17.836 END TEST app_repeat 00:09:17.836 ************************************ 00:09:17.836 07:45:40 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:17.836 07:45:40 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:17.836 07:45:40 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:17.836 07:45:40 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:17.836 07:45:40 event -- common/autotest_common.sh@10 -- # set +x 00:09:17.836 ************************************ 00:09:17.836 START TEST cpu_locks 00:09:17.836 ************************************ 00:09:17.836 07:45:40 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:18.095 * Looking for test storage... 
00:09:18.095 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:18.095 07:45:40 event.cpu_locks -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:09:18.095 07:45:40 event.cpu_locks -- common/autotest_common.sh@1689 -- # lcov --version 00:09:18.095 07:45:40 event.cpu_locks -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:09:18.095 07:45:40 event.cpu_locks -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:09:18.095 07:45:40 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:18.095 07:45:40 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:18.095 07:45:40 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:18.095 07:45:40 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:09:18.095 07:45:40 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:09:18.095 07:45:40 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:09:18.095 07:45:40 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:09:18.095 07:45:40 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:09:18.095 07:45:40 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:09:18.095 07:45:40 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:09:18.095 07:45:40 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:18.095 07:45:40 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:09:18.095 07:45:40 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:09:18.095 07:45:40 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:18.095 07:45:40 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:18.095 07:45:40 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:09:18.095 07:45:40 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:09:18.095 07:45:40 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:18.095 07:45:40 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:09:18.095 07:45:40 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:09:18.095 07:45:40 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:09:18.095 07:45:40 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:09:18.095 07:45:40 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:18.095 07:45:40 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:09:18.095 07:45:40 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:09:18.095 07:45:40 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:18.095 07:45:40 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:18.095 07:45:40 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:09:18.095 07:45:40 event.cpu_locks -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:18.095 07:45:40 event.cpu_locks -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:09:18.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.095 --rc genhtml_branch_coverage=1 00:09:18.095 --rc genhtml_function_coverage=1 00:09:18.095 --rc genhtml_legend=1 00:09:18.095 --rc geninfo_all_blocks=1 00:09:18.095 --rc geninfo_unexecuted_blocks=1 00:09:18.095 00:09:18.095 ' 00:09:18.095 07:45:40 event.cpu_locks -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:09:18.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.095 --rc genhtml_branch_coverage=1 00:09:18.095 --rc genhtml_function_coverage=1 
00:09:18.095 --rc genhtml_legend=1 00:09:18.095 --rc geninfo_all_blocks=1 00:09:18.095 --rc geninfo_unexecuted_blocks=1 00:09:18.095 00:09:18.095 ' 00:09:18.095 07:45:40 event.cpu_locks -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:09:18.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.095 --rc genhtml_branch_coverage=1 00:09:18.095 --rc genhtml_function_coverage=1 00:09:18.095 --rc genhtml_legend=1 00:09:18.095 --rc geninfo_all_blocks=1 00:09:18.095 --rc geninfo_unexecuted_blocks=1 00:09:18.095 00:09:18.095 ' 00:09:18.095 07:45:40 event.cpu_locks -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:09:18.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.095 --rc genhtml_branch_coverage=1 00:09:18.095 --rc genhtml_function_coverage=1 00:09:18.095 --rc genhtml_legend=1 00:09:18.095 --rc geninfo_all_blocks=1 00:09:18.095 --rc geninfo_unexecuted_blocks=1 00:09:18.095 00:09:18.095 ' 00:09:18.095 07:45:40 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:18.095 07:45:40 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:18.095 07:45:40 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:18.095 07:45:40 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:18.095 07:45:40 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:18.095 07:45:40 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:18.095 07:45:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:18.095 ************************************ 00:09:18.095 START TEST default_locks 00:09:18.095 ************************************ 00:09:18.095 07:45:40 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:09:18.095 07:45:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60007 00:09:18.095 07:45:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:18.095 07:45:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60007 00:09:18.095 07:45:40 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 60007 ']' 00:09:18.095 07:45:40 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.095 07:45:40 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:18.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.095 07:45:40 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.095 07:45:40 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:18.095 07:45:40 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:18.354 [2024-11-06 07:45:40.755779] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
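The lcov gate traced at the top of this section is just a component-wise version compare: `lt 1.15 2` asks whether the installed lcov predates 2.x before the legacy `--rc lcov_*` flags are exported. A minimal standalone sketch of that logic (simplified from the scripts/common.sh trace above, not the exact implementation):

```bash
#!/usr/bin/env bash
# Split both version strings on '.', '-' or ':' and compare field by
# field, the same shape as the cmp_versions trace above (sketch only).
lt() {
  local IFS='.-:'
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < len; v++ )); do
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0  # first differing field decides
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
  done
  return 1  # equal is not "less than"
}

lt 1.15 2 && echo "lcov < 2: export the legacy --rc lcov_* coverage options"
```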
00:09:18.354 [2024-11-06 07:45:40.755967] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60007 ] 00:09:18.354 [2024-11-06 07:45:40.950412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.612 [2024-11-06 07:45:41.122533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.548 07:45:42 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:19.548 07:45:42 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:09:19.548 07:45:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60007 00:09:19.548 07:45:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60007 00:09:19.548 07:45:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:20.114 07:45:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60007 00:09:20.114 07:45:42 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 60007 ']' 00:09:20.114 07:45:42 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 60007 00:09:20.114 07:45:42 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:09:20.114 07:45:42 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:20.114 07:45:42 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60007 00:09:20.114 07:45:42 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:20.114 07:45:42 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:20.114 killing process with pid 60007 00:09:20.114 07:45:42 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60007' 00:09:20.114 07:45:42 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 60007 00:09:20.114 07:45:42 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 60007 00:09:22.647 07:45:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60007 00:09:22.647 07:45:44 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:09:22.647 07:45:44 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60007 00:09:22.647 07:45:44 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:09:22.647 07:45:44 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:22.647 07:45:44 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:09:22.647 07:45:44 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:22.647 07:45:44 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 60007 00:09:22.647 07:45:44 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 60007 ']' 00:09:22.647 07:45:44 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.647 07:45:44 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:22.647 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.647 07:45:44 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.647 07:45:44 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:22.647 07:45:44 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:22.647 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (60007) - No such process 00:09:22.647 ERROR: process (pid: 60007) is no longer running 00:09:22.647 07:45:44 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:22.647 07:45:44 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:09:22.647 07:45:44 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:09:22.647 07:45:44 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:22.647 07:45:44 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:22.647 07:45:44 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:22.647 07:45:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:09:22.647 07:45:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:22.647 07:45:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:09:22.647 07:45:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:22.647 00:09:22.647 real 0m4.229s 00:09:22.647 user 0m4.343s 00:09:22.647 sys 0m0.808s 00:09:22.647 07:45:44 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:22.647 ************************************ 00:09:22.647 END TEST default_locks 00:09:22.647 ************************************ 00:09:22.647 07:45:44 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:22.647 07:45:44 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:09:22.647 07:45:44 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:22.647 07:45:44 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:22.647 07:45:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:22.647 ************************************ 00:09:22.647 START TEST default_locks_via_rpc 00:09:22.647 ************************************ 00:09:22.647 07:45:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:09:22.647 07:45:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60084 00:09:22.647 07:45:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60084 00:09:22.647 07:45:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:22.647 07:45:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60084 ']' 00:09:22.647 07:45:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.647 07:45:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:22.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
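The teardown of default_locks above asserts a negative: once pid 60007 is killed, `waitforlisten` on it must fail, which is what the `NOT ... es=1` trace encodes. A simplified standalone version of that inverted assertion (the real helper also validates its argument via `valid_exec_arg` and special-cases signal exit codes, per the `(( es > 128 ))` branch):

```bash
# Run a command that is *expected* to fail; pass only when it does.
# Simplified sketch of the NOT helper traced above.
NOT() {
  local es=0
  "$@" || es=$?
  if (( es > 128 )); then
    es=$(( es & ~128 ))  # assumption: fold signal-death codes back into range
  fi
  (( es != 0 ))  # success (0) only if the wrapped command failed
}

NOT false && echo "inverted assertion passed: the command failed as expected"
```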
00:09:22.647 07:45:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.647 07:45:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:22.647 07:45:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.647 [2024-11-06 07:45:45.024020] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:09:22.647 [2024-11-06 07:45:45.024214] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60084 ] 00:09:22.647 [2024-11-06 07:45:45.211591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.906 [2024-11-06 07:45:45.417042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.842 07:45:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:23.842 07:45:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:23.842 07:45:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:09:23.842 07:45:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.842 07:45:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:23.842 07:45:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.842 07:45:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:09:23.842 07:45:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:23.842 07:45:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:09:23.842 07:45:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:23.842 07:45:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:09:23.842 07:45:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.842 07:45:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:23.842 07:45:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.842 07:45:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60084 00:09:23.842 07:45:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60084 00:09:23.842 07:45:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:24.100 07:45:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60084 00:09:24.100 07:45:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 60084 ']' 00:09:24.100 07:45:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 60084 00:09:24.100 07:45:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:09:24.100 07:45:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:24.100 07:45:46 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60084 00:09:24.358 07:45:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:24.358 07:45:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:24.358 killing process with pid 60084 00:09:24.358 07:45:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60084' 00:09:24.358 07:45:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 60084 00:09:24.358 07:45:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 60084 00:09:26.888 00:09:26.888 real 0m4.186s 00:09:26.888 user 0m4.250s 00:09:26.888 sys 0m0.785s 00:09:26.888 07:45:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:26.888 07:45:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:26.888 ************************************ 00:09:26.888 END TEST default_locks_via_rpc 00:09:26.888 ************************************ 00:09:26.888 07:45:49 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:09:26.888 07:45:49 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:26.888 07:45:49 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:26.888 07:45:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:26.888 ************************************ 00:09:26.888 START TEST non_locking_app_on_locked_coremask 00:09:26.888 ************************************ 00:09:26.888 07:45:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:09:26.888 07:45:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60158 00:09:26.888 07:45:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60158 /var/tmp/spdk.sock 00:09:26.888 07:45:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:26.888 07:45:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60158 ']' 00:09:26.888 07:45:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.888 07:45:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:26.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.888 07:45:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.888 07:45:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:26.888 07:45:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:26.888 [2024-11-06 07:45:49.232762] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
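default_locks_via_rpc, which finishes above, drives the same lock files through the JSON-RPC surface instead of process startup flags. Assuming `rpc_cmd` wraps scripts/rpc.py as usual, the equivalent direct calls against this run's default socket look like:

```bash
# Toggle the per-core lock files on a live target over its RPC socket.
# Paths and socket names are the ones from this run; sketch, not the test.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc -s /var/tmp/spdk.sock framework_disable_cpumask_locks  # lock files released
$rpc -s /var/tmp/spdk.sock framework_enable_cpumask_locks   # one lock per core re-claimed

lslocks | grep spdk_cpu_lock  # the same probe the test uses to confirm the claim
```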
00:09:26.888 [2024-11-06 07:45:49.232925] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60158 ] 00:09:26.888 [2024-11-06 07:45:49.411936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.147 [2024-11-06 07:45:49.542967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.081 07:45:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:28.081 07:45:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:09:28.081 07:45:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60175 00:09:28.081 07:45:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60175 /var/tmp/spdk2.sock 00:09:28.081 07:45:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:09:28.081 07:45:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60175 ']' 00:09:28.081 07:45:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:28.081 07:45:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:28.081 07:45:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:28.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:28.081 07:45:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:28.081 07:45:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:28.081 [2024-11-06 07:45:50.542160] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:09:28.082 [2024-11-06 07:45:50.542430] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60175 ] 00:09:28.340 [2024-11-06 07:45:50.745246] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
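The launch traced just above is the whole point of non_locking_app_on_locked_coremask: the first target claims core 0, and the second shares that core but opts out of lock files entirely, so both come up. A minimal re-creation with this run's binary path (sketch; the real test polls each socket with `waitforlisten` rather than sleeping):

```bash
spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

$spdk_tgt -m 0x1 &                                                 # claims /var/tmp/spdk_cpu_lock_000
pid1=$!
$spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # same core, no lock claim
pid2=$!
```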
00:09:28.340 [2024-11-06 07:45:50.745346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.599 [2024-11-06 07:45:51.014391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.129 07:45:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:31.129 07:45:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:09:31.129 07:45:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60158 00:09:31.129 07:45:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60158 00:09:31.129 07:45:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:31.696 07:45:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60158 00:09:31.696 07:45:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60158 ']' 00:09:31.696 07:45:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 60158 00:09:31.696 07:45:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:09:31.696 07:45:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:31.696 07:45:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60158 00:09:31.696 07:45:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:31.696 07:45:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:31.696 killing process with pid 60158 00:09:31.696 07:45:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60158' 00:09:31.696 07:45:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 60158 00:09:31.696 07:45:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 60158 00:09:36.979 07:45:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60175 00:09:36.979 07:45:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60175 ']' 00:09:36.979 07:45:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 60175 00:09:36.979 07:45:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:09:36.979 07:45:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:36.979 07:45:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60175 00:09:36.979 07:45:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:36.979 07:45:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:36.979 07:45:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60175' 00:09:36.979 killing process with pid 60175 00:09:36.979 07:45:58 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 60175 00:09:36.979 07:45:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 60175 00:09:38.881 00:09:38.881 real 0m12.098s 00:09:38.882 user 0m12.658s 00:09:38.882 sys 0m1.544s 00:09:38.882 07:46:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:38.882 ************************************ 00:09:38.882 END TEST non_locking_app_on_locked_coremask 00:09:38.882 ************************************ 00:09:38.882 07:46:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:38.882 07:46:01 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:09:38.882 07:46:01 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:38.882 07:46:01 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:38.882 07:46:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:38.882 ************************************ 00:09:38.882 START TEST locking_app_on_unlocked_coremask 00:09:38.882 ************************************ 00:09:38.882 07:46:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:09:38.882 07:46:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60329 00:09:38.882 07:46:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60329 /var/tmp/spdk.sock 00:09:38.882 07:46:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60329 ']' 00:09:38.882 07:46:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.882 07:46:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:09:38.882 07:46:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:38.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.882 07:46:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.882 07:46:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:38.882 07:46:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:38.882 [2024-11-06 07:46:01.392581] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:09:38.882 [2024-11-06 07:46:01.392743] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60329 ] 00:09:39.140 [2024-11-06 07:46:01.572946] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
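Every passing variant in this section ends with the same probe: ask lslocks which file locks the target pid holds and look for the spdk_cpu_lock prefix. As a standalone helper, exactly the two commands from the traces:

```bash
# locks_exist, as traced after each startup above: succeed only if the
# given pid holds at least one spdk_cpu_lock_* file lock.
locks_exist() {
  local pid=$1
  lslocks -p "$pid" | grep -q spdk_cpu_lock
}

# usage: locks_exist 60350 && echo "core locks held"
```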
00:09:39.140 [2024-11-06 07:46:01.573030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.140 [2024-11-06 07:46:01.720906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.076 07:46:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:40.076 07:46:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:09:40.076 07:46:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60350 00:09:40.076 07:46:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:40.076 07:46:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60350 /var/tmp/spdk2.sock 00:09:40.076 07:46:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60350 ']' 00:09:40.076 07:46:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:40.076 07:46:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:40.076 07:46:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:40.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:40.076 07:46:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:40.076 07:46:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:40.334 [2024-11-06 07:46:02.775216] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
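Teardown is likewise shared: killprocess first proves the pid is alive and looks like one of our reactors, then kills and reaps it so the lock files disappear with it. Condensed from the traces above (sketch; the real helper also special-cases sudo-wrapped targets instead of bailing out):

```bash
killprocess() {
  local pid=$1
  kill -0 "$pid" || return 1                # same liveness guard as the trace
  local name
  name=$(ps --no-headers -o comm= "$pid")   # e.g. "reactor_0"
  [[ $name == sudo ]] && return 1           # simplification: refuse rather than escalate
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid"                               # reaping works because the target is our child
}
```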
00:09:40.334 [2024-11-06 07:46:02.775904] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60350 ] 00:09:40.591 [2024-11-06 07:46:02.981056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.847 [2024-11-06 07:46:03.266953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.404 07:46:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:43.404 07:46:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:09:43.404 07:46:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60350 00:09:43.404 07:46:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:43.404 07:46:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60350 00:09:43.970 07:46:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60329 00:09:43.970 07:46:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60329 ']' 00:09:43.970 07:46:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 60329 00:09:43.970 07:46:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:09:43.970 07:46:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:43.970 07:46:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60329 00:09:43.970 07:46:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:43.970 07:46:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:43.970 killing process with pid 60329 00:09:43.970 07:46:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60329' 00:09:43.970 07:46:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 60329 00:09:43.970 07:46:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 60329 00:09:49.234 07:46:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60350 00:09:49.234 07:46:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60350 ']' 00:09:49.234 07:46:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 60350 00:09:49.234 07:46:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:09:49.234 07:46:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:49.234 07:46:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60350 00:09:49.234 07:46:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:49.234 07:46:11 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:49.234 killing process with pid 60350 00:09:49.234 07:46:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60350' 00:09:49.234 07:46:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 60350 00:09:49.234 07:46:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 60350 00:09:51.181 00:09:51.181 real 0m12.267s 00:09:51.181 user 0m12.908s 00:09:51.181 sys 0m1.570s 00:09:51.181 07:46:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:51.181 07:46:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:51.181 ************************************ 00:09:51.181 END TEST locking_app_on_unlocked_coremask 00:09:51.181 ************************************ 00:09:51.181 07:46:13 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:09:51.181 07:46:13 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:51.181 07:46:13 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:51.181 07:46:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:51.181 ************************************ 00:09:51.181 START TEST locking_app_on_locked_coremask 00:09:51.181 ************************************ 00:09:51.181 07:46:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:09:51.181 07:46:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60504 00:09:51.181 07:46:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60504 /var/tmp/spdk.sock 00:09:51.181 07:46:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:51.181 07:46:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60504 ']' 00:09:51.181 07:46:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.181 07:46:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:51.181 07:46:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.181 07:46:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:51.181 07:46:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:51.181 [2024-11-06 07:46:13.695010] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
00:09:51.181 [2024-11-06 07:46:13.695180] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60504 ] 00:09:51.439 [2024-11-06 07:46:13.871059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.439 [2024-11-06 07:46:14.006376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.374 07:46:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:52.374 07:46:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:09:52.374 07:46:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60520 00:09:52.374 07:46:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:52.374 07:46:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60520 /var/tmp/spdk2.sock 00:09:52.374 07:46:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:09:52.374 07:46:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60520 /var/tmp/spdk2.sock 00:09:52.374 07:46:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:09:52.374 07:46:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:52.374 07:46:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:09:52.374 07:46:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:52.374 07:46:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 60520 /var/tmp/spdk2.sock 00:09:52.374 07:46:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60520 ']' 00:09:52.374 07:46:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:52.374 07:46:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:52.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:52.374 07:46:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:52.374 07:46:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:52.374 07:46:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:52.633 [2024-11-06 07:46:15.013005] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
00:09:52.633 [2024-11-06 07:46:15.013166] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60520 ] 00:09:52.633 [2024-11-06 07:46:15.210779] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60504 has claimed it. 00:09:52.633 [2024-11-06 07:46:15.210890] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:53.200 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (60520) - No such process 00:09:53.200 ERROR: process (pid: 60520) is no longer running 00:09:53.200 07:46:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:53.200 07:46:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:09:53.200 07:46:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:09:53.200 07:46:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:53.200 07:46:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:53.200 07:46:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:53.200 07:46:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60504 00:09:53.200 07:46:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60504 00:09:53.200 07:46:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:53.768 07:46:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60504 00:09:53.768 07:46:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60504 ']' 00:09:53.768 07:46:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 60504 00:09:53.768 07:46:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:09:53.768 07:46:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:53.768 07:46:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60504 00:09:53.768 07:46:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:53.768 07:46:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:53.768 killing process with pid 60504 00:09:53.768 07:46:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60504' 00:09:53.768 07:46:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 60504 00:09:53.768 07:46:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 60504 00:09:56.297 00:09:56.297 real 0m4.839s 00:09:56.297 user 0m5.200s 00:09:56.297 sys 0m0.874s 00:09:56.297 07:46:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:56.297 07:46:18 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:09:56.297 ************************************ 00:09:56.297 END TEST locking_app_on_locked_coremask 00:09:56.297 ************************************ 00:09:56.297 07:46:18 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:09:56.297 07:46:18 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:56.297 07:46:18 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:56.297 07:46:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:56.297 ************************************ 00:09:56.297 START TEST locking_overlapped_coremask 00:09:56.297 ************************************ 00:09:56.297 07:46:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:09:56.297 07:46:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60595 00:09:56.297 07:46:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:09:56.297 07:46:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60595 /var/tmp/spdk.sock 00:09:56.297 07:46:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 60595 ']' 00:09:56.297 07:46:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.297 07:46:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:56.297 07:46:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.297 07:46:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:56.297 07:46:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:56.297 [2024-11-06 07:46:18.613719] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
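The failure just traced is the heart of locking_app_on_locked_coremask: with core 0 held by pid 60504, the second target prints "Cannot create lock on core 0" and exits before its socket appears, so `NOT waitforlisten` passes. Re-created as a sketch (assumes the target exits nonzero on the claim failure, which the "exiting" message above implies):

```bash
spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

$spdk_tgt -m 0x1 &
pid1=$!
sleep 2  # assumption: enough time for the first claim; the test polls instead

if $spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock; then  # same core, locks enabled
  echo "unexpected: second instance acquired core 0" >&2
else
  echo "startup refused, as expected: core 0 already locked"
fi
kill "$pid1" && wait "$pid1"
```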
00:09:56.297 [2024-11-06 07:46:18.614453] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60595 ] 00:09:56.297 [2024-11-06 07:46:18.803936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:56.555 [2024-11-06 07:46:18.959332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:56.555 [2024-11-06 07:46:18.959849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:56.555 [2024-11-06 07:46:18.959859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.491 07:46:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:57.491 07:46:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:09:57.491 07:46:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60613 00:09:57.491 07:46:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60613 /var/tmp/spdk2.sock 00:09:57.491 07:46:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:09:57.491 07:46:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:09:57.491 07:46:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60613 /var/tmp/spdk2.sock 00:09:57.491 07:46:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:09:57.491 07:46:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:57.491 07:46:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:09:57.491 07:46:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:57.491 07:46:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 60613 /var/tmp/spdk2.sock 00:09:57.491 07:46:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 60613 ']' 00:09:57.491 07:46:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:57.491 07:46:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:57.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:57.491 07:46:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:57.491 07:46:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:57.491 07:46:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:57.491 [2024-11-06 07:46:19.979469] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
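The masks in locking_overlapped_coremask are chosen so exactly one core collides: 0x7 covers cores 0-2 and 0x1c covers cores 2-4. The contested core falls out of the intersection:

```bash
printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))  # -> 0x4, i.e. only core 2 is claimed twice
```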
00:09:57.491 [2024-11-06 07:46:19.979651] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60613 ] 00:09:57.749 [2024-11-06 07:46:20.178134] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60595 has claimed it. 00:09:57.749 [2024-11-06 07:46:20.178239] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:58.315 ERROR: process (pid: 60613) is no longer running 00:09:58.315 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (60613) - No such process 00:09:58.315 07:46:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:58.315 07:46:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:09:58.315 07:46:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:09:58.315 07:46:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:58.315 07:46:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:58.315 07:46:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:58.315 07:46:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:09:58.315 07:46:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:58.315 07:46:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:58.315 07:46:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:58.315 07:46:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60595 00:09:58.315 07:46:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 60595 ']' 00:09:58.315 07:46:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 60595 00:09:58.315 07:46:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:09:58.315 07:46:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:58.315 07:46:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60595 00:09:58.315 killing process with pid 60595 00:09:58.315 07:46:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:58.315 07:46:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:58.315 07:46:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60595' 00:09:58.315 07:46:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 60595 00:09:58.315 07:46:20 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 60595 00:10:00.847 ************************************ 00:10:00.847 END TEST locking_overlapped_coremask 00:10:00.847 ************************************ 00:10:00.847 00:10:00.847 real 0m4.782s 00:10:00.847 user 0m12.958s 00:10:00.847 sys 0m0.722s 00:10:00.847 07:46:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:00.847 07:46:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:00.847 07:46:23 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:10:00.847 07:46:23 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:00.847 07:46:23 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:00.847 07:46:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:00.847 ************************************ 00:10:00.847 START TEST locking_overlapped_coremask_via_rpc 00:10:00.847 ************************************ 00:10:00.847 07:46:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:10:00.847 07:46:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60683 00:10:00.847 07:46:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:10:00.847 07:46:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60683 /var/tmp/spdk.sock 00:10:00.847 07:46:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60683 ']' 00:10:00.847 07:46:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.847 07:46:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:00.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.847 07:46:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.847 07:46:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:00.847 07:46:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:00.847 [2024-11-06 07:46:23.436316] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:10:00.847 [2024-11-06 07:46:23.436512] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60683 ] 00:10:01.113 [2024-11-06 07:46:23.626606] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
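After the 0x1c instance dies, check_remaining_locks (traced above) confirms the survivor still holds exactly one lock file per core of 0x7, by comparing a glob against a brace expansion:

```bash
# The lock files on disk must be exactly _000 through _002 for the 0x7 target.
locks=(/var/tmp/spdk_cpu_lock_*)
locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
[[ ${locks[*]} == "${locks_expected[*]}" ]] \
  && echo "exactly cores 0-2 are locked" \
  || echo "unexpected lock files: ${locks[*]}" >&2
```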
00:10:01.113 [2024-11-06 07:46:23.627040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:01.371 [2024-11-06 07:46:23.840282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:01.371 [2024-11-06 07:46:23.840379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.371 [2024-11-06 07:46:23.840385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:02.744 07:46:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:02.744 07:46:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:02.744 07:46:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60706 00:10:02.744 07:46:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:10:02.744 07:46:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60706 /var/tmp/spdk2.sock 00:10:02.744 07:46:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60706 ']' 00:10:02.744 07:46:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:02.744 07:46:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:02.744 07:46:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:02.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:02.744 07:46:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:02.744 07:46:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.744 [2024-11-06 07:46:25.133113] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:10:02.744 [2024-11-06 07:46:25.133518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60706 ] 00:10:02.744 [2024-11-06 07:46:25.331187] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
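The via_rpc variant starts both targets with --disable-cpumask-locks on the same overlapping masks, then races them to claim cores at runtime; the first claim wins core 2 and the second draws the -32603 error shown just below. Condensed sketch with this run's paths (again assuming rpc_cmd wraps scripts/rpc.py):

```bash
spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$spdk_tgt -m 0x7  --disable-cpumask-locks &
$spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &
sleep 2  # assumption: both RPC sockets are up; the test uses waitforlisten

$rpc -s /var/tmp/spdk.sock  framework_enable_cpumask_locks   # claims cores 0-2
$rpc -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
  || echo "second claim refused: core 2 already locked"      # JSON-RPC -32603
```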
00:10:02.744 [2024-11-06 07:46:25.331283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:03.002 [2024-11-06 07:46:25.609086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:03.002 [2024-11-06 07:46:25.612784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:03.002 [2024-11-06 07:46:25.612787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:05.534 07:46:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:05.534 07:46:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:05.534 07:46:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:10:05.534 07:46:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.534 07:46:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.534 07:46:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.534 07:46:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:05.534 07:46:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:10:05.534 07:46:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:05.534 07:46:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:05.534 07:46:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:05.534 07:46:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:05.534 07:46:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:05.534 07:46:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:05.534 07:46:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.534 07:46:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.534 [2024-11-06 07:46:27.991569] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60683 has claimed it. 00:10:05.534 request: 00:10:05.534 { 00:10:05.534 "method": "framework_enable_cpumask_locks", 00:10:05.534 "req_id": 1 00:10:05.534 } 00:10:05.534 Got JSON-RPC error response 00:10:05.534 response: 00:10:05.534 { 00:10:05.534 "code": -32603, 00:10:05.534 "message": "Failed to claim CPU core: 2" 00:10:05.534 } 00:10:05.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
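The -32603 error above is the point of this test: both targets were started with overlapping coremasks (0x7 = cores 0-2, 0x1c = cores 2-4) and --disable-cpumask-locks, the first target then claimed its cores over RPC, so the second target's attempt to claim core 2 fails. A minimal sketch of the same sequence outside the harness, using the binaries and sockets from the trace above (waitforlisten polling and cleanup omitted):

  # Overlapping coremasks, per-core lock creation deferred at startup.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
  # First target claims cores 0-2, creating /var/tmp/spdk_cpu_lock_000..002.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks
  # Second target now fails to claim core 2 with the -32603 error shown above.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks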
00:10:05.534 07:46:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:05.534 07:46:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:10:05.534 07:46:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:05.534 07:46:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:05.534 07:46:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:05.534 07:46:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60683 /var/tmp/spdk.sock 00:10:05.534 07:46:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60683 ']' 00:10:05.534 07:46:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:05.534 07:46:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:05.534 07:46:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:05.534 07:46:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:05.534 07:46:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.793 07:46:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:05.793 07:46:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:05.793 07:46:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60706 /var/tmp/spdk2.sock 00:10:05.793 07:46:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60706 ']' 00:10:05.793 07:46:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:05.793 07:46:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:05.793 07:46:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:05.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
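Each claimed core is backed by a lock file /var/tmp/spdk_cpu_lock_NNN; the check_remaining_locks step in the trace below verifies that exactly the files for cores 0-2 survive the failed second claim. A sketch of the same comparison, mirroring the locks/locks_expected arrays used there:

  locks=(/var/tmp/spdk_cpu_lock_*)                    # lock files actually present
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # one file per claimed core 0-2
  [[ "${locks[*]}" == "${locks_expected[*]}" ]] || echo 'unexpected CPU lock files' >&2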
00:10:05.793 07:46:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:05.793 07:46:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.052 07:46:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:06.052 07:46:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:06.052 07:46:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:10:06.052 07:46:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:06.052 ************************************ 00:10:06.052 END TEST locking_overlapped_coremask_via_rpc 00:10:06.052 ************************************ 00:10:06.052 07:46:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:06.052 07:46:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:06.052 00:10:06.052 real 0m5.370s 00:10:06.052 user 0m2.059s 00:10:06.052 sys 0m0.265s 00:10:06.052 07:46:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:06.052 07:46:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.314 07:46:28 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:10:06.314 07:46:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60683 ]] 00:10:06.314 07:46:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60683 00:10:06.314 07:46:28 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 60683 ']' 00:10:06.314 07:46:28 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 60683 00:10:06.314 07:46:28 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:10:06.314 07:46:28 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:06.314 07:46:28 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60683 00:10:06.314 killing process with pid 60683 00:10:06.314 07:46:28 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:06.314 07:46:28 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:06.314 07:46:28 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60683' 00:10:06.314 07:46:28 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 60683 00:10:06.314 07:46:28 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 60683 00:10:08.856 07:46:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60706 ]] 00:10:08.856 07:46:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60706 00:10:08.856 07:46:31 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 60706 ']' 00:10:08.856 07:46:31 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 60706 00:10:08.856 07:46:31 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:10:08.856 07:46:31 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:08.856 
07:46:31 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60706 00:10:08.856 killing process with pid 60706 00:10:08.856 07:46:31 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:10:08.856 07:46:31 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:10:08.856 07:46:31 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60706' 00:10:08.856 07:46:31 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 60706 00:10:08.856 07:46:31 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 60706 00:10:11.389 07:46:33 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:11.389 Process with pid 60683 is not found 00:10:11.389 07:46:33 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:10:11.390 07:46:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60683 ]] 00:10:11.390 07:46:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60683 00:10:11.390 07:46:33 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 60683 ']' 00:10:11.390 07:46:33 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 60683 00:10:11.390 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (60683) - No such process 00:10:11.390 07:46:33 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 60683 is not found' 00:10:11.390 07:46:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60706 ]] 00:10:11.390 07:46:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60706 00:10:11.390 07:46:33 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 60706 ']' 00:10:11.390 07:46:33 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 60706 00:10:11.390 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (60706) - No such process 00:10:11.390 Process with pid 60706 is not found 00:10:11.390 07:46:33 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 60706 is not found' 00:10:11.390 07:46:33 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:11.390 00:10:11.390 real 0m52.988s 00:10:11.390 user 1m32.659s 00:10:11.390 sys 0m7.853s 00:10:11.390 07:46:33 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:11.390 ************************************ 00:10:11.390 END TEST cpu_locks 00:10:11.390 ************************************ 00:10:11.390 07:46:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:11.390 ************************************ 00:10:11.390 END TEST event 00:10:11.390 ************************************ 00:10:11.390 00:10:11.390 real 1m27.228s 00:10:11.390 user 2m41.614s 00:10:11.390 sys 0m12.287s 00:10:11.390 07:46:33 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:11.390 07:46:33 event -- common/autotest_common.sh@10 -- # set +x 00:10:11.390 07:46:33 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:11.390 07:46:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:11.390 07:46:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:11.390 07:46:33 -- common/autotest_common.sh@10 -- # set +x 00:10:11.390 ************************************ 00:10:11.390 START TEST thread 00:10:11.390 ************************************ 00:10:11.390 07:46:33 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:11.390 * Looking for test storage... 
00:10:11.390 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:10:11.390 07:46:33 thread -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:10:11.390 07:46:33 thread -- common/autotest_common.sh@1689 -- # lcov --version 00:10:11.390 07:46:33 thread -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:10:11.390 07:46:33 thread -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:10:11.390 07:46:33 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:11.390 07:46:33 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:11.390 07:46:33 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:11.390 07:46:33 thread -- scripts/common.sh@336 -- # IFS=.-: 00:10:11.390 07:46:33 thread -- scripts/common.sh@336 -- # read -ra ver1 00:10:11.390 07:46:33 thread -- scripts/common.sh@337 -- # IFS=.-: 00:10:11.390 07:46:33 thread -- scripts/common.sh@337 -- # read -ra ver2 00:10:11.390 07:46:33 thread -- scripts/common.sh@338 -- # local 'op=<' 00:10:11.390 07:46:33 thread -- scripts/common.sh@340 -- # ver1_l=2 00:10:11.390 07:46:33 thread -- scripts/common.sh@341 -- # ver2_l=1 00:10:11.390 07:46:33 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:11.390 07:46:33 thread -- scripts/common.sh@344 -- # case "$op" in 00:10:11.390 07:46:33 thread -- scripts/common.sh@345 -- # : 1 00:10:11.390 07:46:33 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:11.390 07:46:33 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:11.390 07:46:33 thread -- scripts/common.sh@365 -- # decimal 1 00:10:11.390 07:46:33 thread -- scripts/common.sh@353 -- # local d=1 00:10:11.390 07:46:33 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:11.390 07:46:33 thread -- scripts/common.sh@355 -- # echo 1 00:10:11.390 07:46:33 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:10:11.390 07:46:33 thread -- scripts/common.sh@366 -- # decimal 2 00:10:11.390 07:46:33 thread -- scripts/common.sh@353 -- # local d=2 00:10:11.390 07:46:33 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:11.390 07:46:33 thread -- scripts/common.sh@355 -- # echo 2 00:10:11.390 07:46:33 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:10:11.390 07:46:33 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:11.390 07:46:33 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:11.390 07:46:33 thread -- scripts/common.sh@368 -- # return 0 00:10:11.390 07:46:33 thread -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:11.390 07:46:33 thread -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:10:11.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.390 --rc genhtml_branch_coverage=1 00:10:11.390 --rc genhtml_function_coverage=1 00:10:11.390 --rc genhtml_legend=1 00:10:11.390 --rc geninfo_all_blocks=1 00:10:11.390 --rc geninfo_unexecuted_blocks=1 00:10:11.390 00:10:11.390 ' 00:10:11.390 07:46:33 thread -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:10:11.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.390 --rc genhtml_branch_coverage=1 00:10:11.390 --rc genhtml_function_coverage=1 00:10:11.390 --rc genhtml_legend=1 00:10:11.390 --rc geninfo_all_blocks=1 00:10:11.390 --rc geninfo_unexecuted_blocks=1 00:10:11.390 00:10:11.390 ' 00:10:11.390 07:46:33 thread -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:10:11.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:10:11.390 --rc genhtml_branch_coverage=1 00:10:11.390 --rc genhtml_function_coverage=1 00:10:11.390 --rc genhtml_legend=1 00:10:11.390 --rc geninfo_all_blocks=1 00:10:11.390 --rc geninfo_unexecuted_blocks=1 00:10:11.390 00:10:11.390 ' 00:10:11.390 07:46:33 thread -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:10:11.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.390 --rc genhtml_branch_coverage=1 00:10:11.390 --rc genhtml_function_coverage=1 00:10:11.390 --rc genhtml_legend=1 00:10:11.390 --rc geninfo_all_blocks=1 00:10:11.390 --rc geninfo_unexecuted_blocks=1 00:10:11.390 00:10:11.390 ' 00:10:11.390 07:46:33 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:11.390 07:46:33 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:10:11.390 07:46:33 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:11.390 07:46:33 thread -- common/autotest_common.sh@10 -- # set +x 00:10:11.390 ************************************ 00:10:11.390 START TEST thread_poller_perf 00:10:11.390 ************************************ 00:10:11.390 07:46:33 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:11.390 [2024-11-06 07:46:33.747675] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:10:11.390 [2024-11-06 07:46:33.747832] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60907 ] 00:10:11.390 [2024-11-06 07:46:33.927111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.653 [2024-11-06 07:46:34.075860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.653 Running 1000 pollers for 1 seconds with 1 microseconds period. 
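The banner just printed decodes the poller_perf flags from the run_test line above: -b is the number of pollers registered (1000), -l the poll period in microseconds (1 here, 0 for the zero-period run that follows), and -t the run time in seconds. This reading is inferred from the banner text, not from the tool's help output:

  # Assumed flag meanings, matching 'Running 1000 pollers for 1 seconds with 1 microseconds period.'
  /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1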
00:10:13.034 [2024-11-06T07:46:35.663Z] ====================================== 00:10:13.034 [2024-11-06T07:46:35.663Z] busy:2214542924 (cyc) 00:10:13.034 [2024-11-06T07:46:35.663Z] total_run_count: 294000 00:10:13.034 [2024-11-06T07:46:35.663Z] tsc_hz: 2200000000 (cyc) 00:10:13.034 [2024-11-06T07:46:35.663Z] ====================================== 00:10:13.034 [2024-11-06T07:46:35.663Z] poller_cost: 7532 (cyc), 3423 (nsec) 00:10:13.034 00:10:13.034 real 0m1.662s 00:10:13.034 user 0m1.431s 00:10:13.034 sys 0m0.119s 00:10:13.034 07:46:35 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:13.034 ************************************ 00:10:13.034 07:46:35 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:13.034 END TEST thread_poller_perf 00:10:13.034 ************************************ 00:10:13.034 07:46:35 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:13.034 07:46:35 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:10:13.034 07:46:35 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:13.034 07:46:35 thread -- common/autotest_common.sh@10 -- # set +x 00:10:13.034 ************************************ 00:10:13.034 START TEST thread_poller_perf 00:10:13.034 ************************************ 00:10:13.034 07:46:35 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:13.034 [2024-11-06 07:46:35.456236] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:10:13.034 [2024-11-06 07:46:35.456793] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60951 ] 00:10:13.034 [2024-11-06 07:46:35.641854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.293 Running 1000 pollers for 1 seconds with 0 microseconds period. 
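The poller_cost line in the summary above follows directly from the two counters printed with it: busy cycles divided by total_run_count gives cycles per poll, and scaling by tsc_hz converts that to nanoseconds. For this run, 2214542924 / 294000 = 7532 cyc (integer-truncated) and 7532 * 1e9 / 2200000000 = 3423 nsec, matching the report. The same arithmetic as a hypothetical shell check:

  busy=2214542924 runs=294000 tsc_hz=2200000000
  cyc=$(( busy / runs ))                      # 7532 cycles per poll
  nsec=$(( cyc * 1000000000 / tsc_hz ))       # 3423 ns per poll
  echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"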
00:10:13.293 [2024-11-06 07:46:35.799421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.668 [2024-11-06T07:46:37.297Z] ====================================== 00:10:14.668 [2024-11-06T07:46:37.297Z] busy:2204917743 (cyc) 00:10:14.668 [2024-11-06T07:46:37.297Z] total_run_count: 3468000 00:10:14.668 [2024-11-06T07:46:37.297Z] tsc_hz: 2200000000 (cyc) 00:10:14.668 [2024-11-06T07:46:37.297Z] ====================================== 00:10:14.668 [2024-11-06T07:46:37.297Z] poller_cost: 635 (cyc), 288 (nsec) 00:10:14.668 00:10:14.668 real 0m1.619s 00:10:14.668 user 0m1.399s 00:10:14.668 sys 0m0.110s 00:10:14.668 07:46:37 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:14.668 ************************************ 00:10:14.668 END TEST thread_poller_perf 00:10:14.668 ************************************ 00:10:14.668 07:46:37 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:14.668 07:46:37 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:10:14.668 00:10:14.668 real 0m3.578s 00:10:14.668 user 0m2.988s 00:10:14.668 sys 0m0.364s 00:10:14.668 07:46:37 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:14.668 ************************************ 00:10:14.668 END TEST thread 00:10:14.668 ************************************ 00:10:14.668 07:46:37 thread -- common/autotest_common.sh@10 -- # set +x 00:10:14.668 07:46:37 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:10:14.668 07:46:37 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:14.668 07:46:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:14.668 07:46:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:14.668 07:46:37 -- common/autotest_common.sh@10 -- # set +x 00:10:14.668 ************************************ 00:10:14.668 START TEST app_cmdline 00:10:14.668 ************************************ 00:10:14.668 07:46:37 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:14.668 * Looking for test storage... 
00:10:14.668 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:10:14.668 07:46:37 app_cmdline -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:10:14.668 07:46:37 app_cmdline -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:10:14.668 07:46:37 app_cmdline -- common/autotest_common.sh@1689 -- # lcov --version 00:10:14.927 07:46:37 app_cmdline -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:10:14.927 07:46:37 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:14.927 07:46:37 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:14.927 07:46:37 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:14.927 07:46:37 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:10:14.927 07:46:37 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:10:14.927 07:46:37 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:10:14.927 07:46:37 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:10:14.927 07:46:37 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:10:14.927 07:46:37 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:10:14.927 07:46:37 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:10:14.927 07:46:37 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:14.927 07:46:37 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:10:14.927 07:46:37 app_cmdline -- scripts/common.sh@345 -- # : 1 00:10:14.927 07:46:37 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:14.927 07:46:37 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:14.927 07:46:37 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:10:14.927 07:46:37 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:10:14.927 07:46:37 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:14.927 07:46:37 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:10:14.927 07:46:37 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:10:14.927 07:46:37 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:10:14.927 07:46:37 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:10:14.927 07:46:37 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:14.927 07:46:37 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:10:14.927 07:46:37 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:10:14.927 07:46:37 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:14.927 07:46:37 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:14.927 07:46:37 app_cmdline -- scripts/common.sh@368 -- # return 0 00:10:14.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
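The block that just ran is autotest_common's recurring coverage gate: it reads the installed lcov version (1.15 here), compares it against 2 with cmp_versions from scripts/common.sh, and, for lcov older than 2.0, exports LCOV_OPTS/LCOV with the 1.x-style --rc flag spellings. A hedged condensation of what the trace shows (lt is the cmp_versions wrapper visible above):

  ver=$(lcov --version | awk '{print $NF}')   # 1.15 on this host
  if lt "$ver" 2; then                        # true: 1.15 < 2
    export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi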
00:10:14.927 07:46:37 app_cmdline -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:14.927 07:46:37 app_cmdline -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:10:14.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.927 --rc genhtml_branch_coverage=1 00:10:14.927 --rc genhtml_function_coverage=1 00:10:14.927 --rc genhtml_legend=1 00:10:14.927 --rc geninfo_all_blocks=1 00:10:14.927 --rc geninfo_unexecuted_blocks=1 00:10:14.927 00:10:14.927 ' 00:10:14.927 07:46:37 app_cmdline -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:10:14.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.927 --rc genhtml_branch_coverage=1 00:10:14.927 --rc genhtml_function_coverage=1 00:10:14.927 --rc genhtml_legend=1 00:10:14.927 --rc geninfo_all_blocks=1 00:10:14.927 --rc geninfo_unexecuted_blocks=1 00:10:14.927 00:10:14.927 ' 00:10:14.927 07:46:37 app_cmdline -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:10:14.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.927 --rc genhtml_branch_coverage=1 00:10:14.927 --rc genhtml_function_coverage=1 00:10:14.927 --rc genhtml_legend=1 00:10:14.927 --rc geninfo_all_blocks=1 00:10:14.927 --rc geninfo_unexecuted_blocks=1 00:10:14.927 00:10:14.927 ' 00:10:14.927 07:46:37 app_cmdline -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:10:14.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.927 --rc genhtml_branch_coverage=1 00:10:14.927 --rc genhtml_function_coverage=1 00:10:14.927 --rc genhtml_legend=1 00:10:14.927 --rc geninfo_all_blocks=1 00:10:14.927 --rc geninfo_unexecuted_blocks=1 00:10:14.927 00:10:14.927 ' 00:10:14.927 07:46:37 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:10:14.927 07:46:37 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61034 00:10:14.927 07:46:37 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61034 00:10:14.927 07:46:37 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 61034 ']' 00:10:14.927 07:46:37 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:10:14.927 07:46:37 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.927 07:46:37 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:14.927 07:46:37 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.927 07:46:37 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:14.927 07:46:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:14.927 [2024-11-06 07:46:37.448314] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
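This spdk_tgt instance is launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are served; the traces that follow show the allowed spdk_get_version returning the version JSON and the unlisted env_dpdk_get_mem_stats being rejected with -32601 ("Method not found"). Exercising both sides of the whitelist by hand would look like:

  # On the whitelist: returns the {"version": "SPDK v25.01-pre ...", "fields": ...} JSON below.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
  # Not on the whitelist: fails with JSON-RPC error -32601, "Method not found".
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats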
00:10:14.927 [2024-11-06 07:46:37.448773] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61034 ] 00:10:15.186 [2024-11-06 07:46:37.636465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.186 [2024-11-06 07:46:37.801427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.122 07:46:38 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:16.122 07:46:38 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:10:16.122 07:46:38 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:10:16.381 { 00:10:16.381 "version": "SPDK v25.01-pre git sha1 ca5713c38", 00:10:16.381 "fields": { 00:10:16.381 "major": 25, 00:10:16.381 "minor": 1, 00:10:16.381 "patch": 0, 00:10:16.381 "suffix": "-pre", 00:10:16.381 "commit": "ca5713c38" 00:10:16.381 } 00:10:16.381 } 00:10:16.639 07:46:39 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:10:16.639 07:46:39 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:10:16.639 07:46:39 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:10:16.639 07:46:39 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:10:16.639 07:46:39 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:10:16.639 07:46:39 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.639 07:46:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:16.639 07:46:39 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:10:16.639 07:46:39 app_cmdline -- app/cmdline.sh@26 -- # sort 00:10:16.639 07:46:39 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.639 07:46:39 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:10:16.639 07:46:39 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:10:16.639 07:46:39 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:16.639 07:46:39 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:10:16.639 07:46:39 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:16.639 07:46:39 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:16.639 07:46:39 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:16.639 07:46:39 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:16.639 07:46:39 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:16.640 07:46:39 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:16.640 07:46:39 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:16.640 07:46:39 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:16.640 07:46:39 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:16.640 07:46:39 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:16.898 request: 00:10:16.898 { 00:10:16.898 "method": "env_dpdk_get_mem_stats", 00:10:16.898 "req_id": 1 00:10:16.898 } 00:10:16.898 Got JSON-RPC error response 00:10:16.898 response: 00:10:16.898 { 00:10:16.898 "code": -32601, 00:10:16.898 "message": "Method not found" 00:10:16.898 } 00:10:16.898 07:46:39 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:10:16.898 07:46:39 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:16.898 07:46:39 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:16.898 07:46:39 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:16.898 07:46:39 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61034 00:10:16.898 07:46:39 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 61034 ']' 00:10:16.898 07:46:39 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 61034 00:10:16.898 07:46:39 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:10:16.898 07:46:39 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:16.898 07:46:39 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61034 00:10:16.898 killing process with pid 61034 00:10:16.898 07:46:39 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:16.898 07:46:39 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:16.898 07:46:39 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61034' 00:10:16.898 07:46:39 app_cmdline -- common/autotest_common.sh@969 -- # kill 61034 00:10:16.898 07:46:39 app_cmdline -- common/autotest_common.sh@974 -- # wait 61034 00:10:19.431 00:10:19.431 real 0m4.562s 00:10:19.431 user 0m5.008s 00:10:19.431 sys 0m0.686s 00:10:19.431 ************************************ 00:10:19.431 END TEST app_cmdline 00:10:19.431 ************************************ 00:10:19.431 07:46:41 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:19.431 07:46:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:19.431 07:46:41 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:19.431 07:46:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:19.431 07:46:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:19.431 07:46:41 -- common/autotest_common.sh@10 -- # set +x 00:10:19.431 ************************************ 00:10:19.431 START TEST version 00:10:19.431 ************************************ 00:10:19.431 07:46:41 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:19.431 * Looking for test storage... 
00:10:19.431 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:10:19.431 07:46:41 version -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:10:19.431 07:46:41 version -- common/autotest_common.sh@1689 -- # lcov --version 00:10:19.431 07:46:41 version -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:10:19.431 07:46:41 version -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:10:19.431 07:46:41 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:19.431 07:46:41 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:19.431 07:46:41 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:19.431 07:46:41 version -- scripts/common.sh@336 -- # IFS=.-: 00:10:19.431 07:46:41 version -- scripts/common.sh@336 -- # read -ra ver1 00:10:19.431 07:46:41 version -- scripts/common.sh@337 -- # IFS=.-: 00:10:19.431 07:46:41 version -- scripts/common.sh@337 -- # read -ra ver2 00:10:19.431 07:46:41 version -- scripts/common.sh@338 -- # local 'op=<' 00:10:19.431 07:46:41 version -- scripts/common.sh@340 -- # ver1_l=2 00:10:19.431 07:46:41 version -- scripts/common.sh@341 -- # ver2_l=1 00:10:19.431 07:46:41 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:19.431 07:46:41 version -- scripts/common.sh@344 -- # case "$op" in 00:10:19.431 07:46:41 version -- scripts/common.sh@345 -- # : 1 00:10:19.431 07:46:41 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:19.431 07:46:41 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:19.431 07:46:41 version -- scripts/common.sh@365 -- # decimal 1 00:10:19.431 07:46:41 version -- scripts/common.sh@353 -- # local d=1 00:10:19.431 07:46:41 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:19.431 07:46:41 version -- scripts/common.sh@355 -- # echo 1 00:10:19.431 07:46:41 version -- scripts/common.sh@365 -- # ver1[v]=1 00:10:19.431 07:46:41 version -- scripts/common.sh@366 -- # decimal 2 00:10:19.432 07:46:41 version -- scripts/common.sh@353 -- # local d=2 00:10:19.432 07:46:41 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:19.432 07:46:41 version -- scripts/common.sh@355 -- # echo 2 00:10:19.432 07:46:41 version -- scripts/common.sh@366 -- # ver2[v]=2 00:10:19.432 07:46:41 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:19.432 07:46:41 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:19.432 07:46:41 version -- scripts/common.sh@368 -- # return 0 00:10:19.432 07:46:41 version -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:19.432 07:46:41 version -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:10:19.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.432 --rc genhtml_branch_coverage=1 00:10:19.432 --rc genhtml_function_coverage=1 00:10:19.432 --rc genhtml_legend=1 00:10:19.432 --rc geninfo_all_blocks=1 00:10:19.432 --rc geninfo_unexecuted_blocks=1 00:10:19.432 00:10:19.432 ' 00:10:19.432 07:46:41 version -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:10:19.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.432 --rc genhtml_branch_coverage=1 00:10:19.432 --rc genhtml_function_coverage=1 00:10:19.432 --rc genhtml_legend=1 00:10:19.432 --rc geninfo_all_blocks=1 00:10:19.432 --rc geninfo_unexecuted_blocks=1 00:10:19.432 00:10:19.432 ' 00:10:19.432 07:46:41 version -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:10:19.432 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:19.432 --rc genhtml_branch_coverage=1 00:10:19.432 --rc genhtml_function_coverage=1 00:10:19.432 --rc genhtml_legend=1 00:10:19.432 --rc geninfo_all_blocks=1 00:10:19.432 --rc geninfo_unexecuted_blocks=1 00:10:19.432 00:10:19.432 ' 00:10:19.432 07:46:41 version -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:10:19.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.432 --rc genhtml_branch_coverage=1 00:10:19.432 --rc genhtml_function_coverage=1 00:10:19.432 --rc genhtml_legend=1 00:10:19.432 --rc geninfo_all_blocks=1 00:10:19.432 --rc geninfo_unexecuted_blocks=1 00:10:19.432 00:10:19.432 ' 00:10:19.432 07:46:41 version -- app/version.sh@17 -- # get_header_version major 00:10:19.432 07:46:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:19.432 07:46:41 version -- app/version.sh@14 -- # tr -d '"' 00:10:19.432 07:46:41 version -- app/version.sh@14 -- # cut -f2 00:10:19.432 07:46:41 version -- app/version.sh@17 -- # major=25 00:10:19.432 07:46:41 version -- app/version.sh@18 -- # get_header_version minor 00:10:19.432 07:46:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:19.432 07:46:41 version -- app/version.sh@14 -- # cut -f2 00:10:19.432 07:46:41 version -- app/version.sh@14 -- # tr -d '"' 00:10:19.432 07:46:41 version -- app/version.sh@18 -- # minor=1 00:10:19.432 07:46:41 version -- app/version.sh@19 -- # get_header_version patch 00:10:19.432 07:46:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:19.432 07:46:41 version -- app/version.sh@14 -- # cut -f2 00:10:19.432 07:46:41 version -- app/version.sh@14 -- # tr -d '"' 00:10:19.432 07:46:41 version -- app/version.sh@19 -- # patch=0 00:10:19.432 07:46:41 version -- app/version.sh@20 -- # get_header_version suffix 00:10:19.432 07:46:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:19.432 07:46:41 version -- app/version.sh@14 -- # cut -f2 00:10:19.432 07:46:41 version -- app/version.sh@14 -- # tr -d '"' 00:10:19.432 07:46:41 version -- app/version.sh@20 -- # suffix=-pre 00:10:19.432 07:46:41 version -- app/version.sh@22 -- # version=25.1 00:10:19.432 07:46:41 version -- app/version.sh@25 -- # (( patch != 0 )) 00:10:19.432 07:46:41 version -- app/version.sh@28 -- # version=25.1rc0 00:10:19.432 07:46:41 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:10:19.432 07:46:41 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:10:19.432 07:46:41 version -- app/version.sh@30 -- # py_version=25.1rc0 00:10:19.432 07:46:41 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:10:19.432 ************************************ 00:10:19.432 END TEST version 00:10:19.432 ************************************ 00:10:19.432 00:10:19.432 real 0m0.240s 00:10:19.432 user 0m0.166s 00:10:19.432 sys 0m0.111s 00:10:19.432 07:46:41 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:19.432 07:46:41 version -- common/autotest_common.sh@10 -- # set +x 00:10:19.432 07:46:42 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:10:19.432 07:46:42 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:10:19.432 07:46:42 -- spdk/autotest.sh@194 -- # uname -s 00:10:19.432 07:46:42 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:10:19.432 07:46:42 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:10:19.432 07:46:42 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:10:19.432 07:46:42 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:10:19.432 07:46:42 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:10:19.432 07:46:42 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:19.432 07:46:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:19.432 07:46:42 -- common/autotest_common.sh@10 -- # set +x 00:10:19.432 ************************************ 00:10:19.432 START TEST blockdev_nvme 00:10:19.432 ************************************ 00:10:19.432 07:46:42 blockdev_nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:10:19.691 * Looking for test storage... 00:10:19.691 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:10:19.691 07:46:42 blockdev_nvme -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:10:19.691 07:46:42 blockdev_nvme -- common/autotest_common.sh@1689 -- # lcov --version 00:10:19.691 07:46:42 blockdev_nvme -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:10:19.691 07:46:42 blockdev_nvme -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:10:19.691 07:46:42 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:19.691 07:46:42 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:19.691 07:46:42 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:19.691 07:46:42 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:10:19.691 07:46:42 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:10:19.691 07:46:42 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:10:19.691 07:46:42 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:10:19.691 07:46:42 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:10:19.691 07:46:42 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:10:19.691 07:46:42 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:10:19.691 07:46:42 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:19.691 07:46:42 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:10:19.691 07:46:42 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:10:19.691 07:46:42 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:19.691 07:46:42 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:19.691 07:46:42 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:10:19.691 07:46:42 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:10:19.691 07:46:42 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:19.691 07:46:42 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:10:19.691 07:46:42 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:10:19.691 07:46:42 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:10:19.691 07:46:42 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:10:19.691 07:46:42 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:19.691 07:46:42 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:10:19.691 07:46:42 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:10:19.691 07:46:42 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:19.691 07:46:42 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:19.691 07:46:42 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:10:19.691 07:46:42 blockdev_nvme -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:19.691 07:46:42 blockdev_nvme -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:10:19.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.691 --rc genhtml_branch_coverage=1 00:10:19.691 --rc genhtml_function_coverage=1 00:10:19.691 --rc genhtml_legend=1 00:10:19.691 --rc geninfo_all_blocks=1 00:10:19.691 --rc geninfo_unexecuted_blocks=1 00:10:19.691 00:10:19.691 ' 00:10:19.691 07:46:42 blockdev_nvme -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:10:19.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.691 --rc genhtml_branch_coverage=1 00:10:19.691 --rc genhtml_function_coverage=1 00:10:19.691 --rc genhtml_legend=1 00:10:19.691 --rc geninfo_all_blocks=1 00:10:19.691 --rc geninfo_unexecuted_blocks=1 00:10:19.691 00:10:19.691 ' 00:10:19.691 07:46:42 blockdev_nvme -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:10:19.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.691 --rc genhtml_branch_coverage=1 00:10:19.691 --rc genhtml_function_coverage=1 00:10:19.691 --rc genhtml_legend=1 00:10:19.691 --rc geninfo_all_blocks=1 00:10:19.691 --rc geninfo_unexecuted_blocks=1 00:10:19.691 00:10:19.691 ' 00:10:19.691 07:46:42 blockdev_nvme -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:10:19.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.691 --rc genhtml_branch_coverage=1 00:10:19.691 --rc genhtml_function_coverage=1 00:10:19.691 --rc genhtml_legend=1 00:10:19.691 --rc geninfo_all_blocks=1 00:10:19.691 --rc geninfo_unexecuted_blocks=1 00:10:19.692 00:10:19.692 ' 00:10:19.692 07:46:42 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:19.692 07:46:42 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:10:19.692 07:46:42 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:10:19.692 07:46:42 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:19.692 07:46:42 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:10:19.692 07:46:42 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:10:19.692 07:46:42 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:10:19.692 07:46:42 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:10:19.692 07:46:42 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:10:19.692 07:46:42 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:10:19.692 07:46:42 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:10:19.692 07:46:42 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:10:19.692 07:46:42 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:10:19.692 07:46:42 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:10:19.692 07:46:42 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:10:19.692 07:46:42 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:10:19.692 07:46:42 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:10:19.692 07:46:42 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:10:19.692 07:46:42 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:10:19.692 07:46:42 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:10:19.692 07:46:42 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:10:19.692 07:46:42 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:10:19.692 07:46:42 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:10:19.692 07:46:42 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:10:19.692 07:46:42 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61223 00:10:19.692 07:46:42 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:19.692 07:46:42 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:10:19.692 07:46:42 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 61223 00:10:19.692 07:46:42 blockdev_nvme -- common/autotest_common.sh@831 -- # '[' -z 61223 ']' 00:10:19.692 07:46:42 blockdev_nvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.692 07:46:42 blockdev_nvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:19.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.692 07:46:42 blockdev_nvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.692 07:46:42 blockdev_nvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:19.692 07:46:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:19.951 [2024-11-06 07:46:42.389100] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
00:10:19.951 [2024-11-06 07:46:42.389352] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61223 ] 00:10:20.210 [2024-11-06 07:46:42.597159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.210 [2024-11-06 07:46:42.763017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.188 07:46:43 blockdev_nvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:21.188 07:46:43 blockdev_nvme -- common/autotest_common.sh@864 -- # return 0 00:10:21.188 07:46:43 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:10:21.188 07:46:43 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:10:21.188 07:46:43 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:10:21.188 07:46:43 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:10:21.188 07:46:43 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:21.188 07:46:43 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:10:21.188 07:46:43 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.188 07:46:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:21.446 07:46:44 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.446 07:46:44 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:10:21.446 07:46:44 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.446 07:46:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:21.446 07:46:44 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.446 07:46:44 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:10:21.446 07:46:44 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:10:21.446 07:46:44 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.446 07:46:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:21.446 07:46:44 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.446 07:46:44 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:10:21.446 07:46:44 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.446 07:46:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:21.704 07:46:44 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.704 07:46:44 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:10:21.704 07:46:44 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.704 07:46:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:21.704 07:46:44 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.704 07:46:44 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:10:21.704 07:46:44 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:10:21.704 07:46:44 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:10:21.704 07:46:44 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.704 07:46:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:21.704 07:46:44 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.704 07:46:44 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:10:21.704 07:46:44 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:10:21.705 07:46:44 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "8c895b46-41ef-412a-a698-4c95dab3eb26"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "8c895b46-41ef-412a-a698-4c95dab3eb26",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "44d2ee87-2fe9-4ae5-91e9-82cd3672d25d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "44d2ee87-2fe9-4ae5-91e9-82cd3672d25d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "9a950db4-5276-4fa1-94f5-d9f6621154a4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9a950db4-5276-4fa1-94f5-d9f6621154a4",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "906e6e37-cbc0-4e49-8b22-6db3a9b386cb"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "906e6e37-cbc0-4e49-8b22-6db3a9b386cb",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "053cf29b-e251-419d-90a3-b3bb9a8b685a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "053cf29b-e251-419d-90a3-b3bb9a8b685a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "d2bee917-8cef-4a50-9f03-266c9ae39879"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "d2bee917-8cef-4a50-9f03-266c9ae39879",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:10:21.705 07:46:44 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:10:21.705 07:46:44 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:10:21.705 07:46:44 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:10:21.705 07:46:44 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 61223 00:10:21.705 07:46:44 blockdev_nvme -- common/autotest_common.sh@950 -- # '[' -z 61223 ']' 00:10:21.705 07:46:44 blockdev_nvme -- common/autotest_common.sh@954 -- # kill -0 61223 00:10:21.705 07:46:44 blockdev_nvme -- common/autotest_common.sh@955 -- # uname 00:10:21.705 07:46:44 
blockdev_nvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:21.705 07:46:44 blockdev_nvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61223 00:10:21.705 07:46:44 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:21.705 07:46:44 blockdev_nvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:21.705 killing process with pid 61223 00:10:21.705 07:46:44 blockdev_nvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61223' 00:10:21.705 07:46:44 blockdev_nvme -- common/autotest_common.sh@969 -- # kill 61223 00:10:21.705 07:46:44 blockdev_nvme -- common/autotest_common.sh@974 -- # wait 61223 00:10:24.234 07:46:46 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:24.234 07:46:46 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:10:24.234 07:46:46 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:10:24.234 07:46:46 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:24.234 07:46:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:24.234 ************************************ 00:10:24.234 START TEST bdev_hello_world 00:10:24.234 ************************************ 00:10:24.234 07:46:46 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:10:24.234 [2024-11-06 07:46:46.667785] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:10:24.234 [2024-11-06 07:46:46.668007] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61318 ] 00:10:24.234 [2024-11-06 07:46:46.861630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.492 [2024-11-06 07:46:47.021313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.428 [2024-11-06 07:46:47.704583] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:10:25.428 [2024-11-06 07:46:47.704654] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:10:25.428 [2024-11-06 07:46:47.704707] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:10:25.428 [2024-11-06 07:46:47.708384] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:10:25.428 [2024-11-06 07:46:47.708991] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:10:25.428 [2024-11-06 07:46:47.709039] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:10:25.428 [2024-11-06 07:46:47.709309] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
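The hello_bdev notices above show the example's complete flow: open the Nvme0n1 bdev, get an I/O channel, write a buffer, then read it back ("Hello World!"). A minimal sketch of rerunning just this step by hand, assuming the same repo layout as this job and that the bdev.json written earlier in the run is still in place:

  # re-run the traced example standalone
  cd /home/vagrant/spdk_repo/spdk
  ./build/examples/hello_bdev \
      --json test/bdev/bdev.json \   # bdev config built from gen_nvme.sh output
      -b Nvme0n1                     # bdev to open for the write/read round-trip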
00:10:25.428 00:10:25.428 [2024-11-06 07:46:47.709375] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:10:26.365 00:10:26.365 real 0m2.264s 00:10:26.365 user 0m1.860s 00:10:26.365 sys 0m0.290s 00:10:26.365 07:46:48 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:26.365 07:46:48 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:10:26.365 ************************************ 00:10:26.365 END TEST bdev_hello_world 00:10:26.365 ************************************ 00:10:26.365 07:46:48 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:10:26.365 07:46:48 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:26.365 07:46:48 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:26.365 07:46:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:26.365 ************************************ 00:10:26.365 START TEST bdev_bounds 00:10:26.365 ************************************ 00:10:26.365 07:46:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:10:26.365 07:46:48 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61368 00:10:26.365 07:46:48 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:10:26.365 Process bdevio pid: 61368 00:10:26.365 07:46:48 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61368' 00:10:26.365 07:46:48 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61368 00:10:26.365 07:46:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 61368 ']' 00:10:26.365 07:46:48 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:26.365 07:46:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:26.365 07:46:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:26.365 07:46:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:26.365 07:46:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:26.365 07:46:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:10:26.365 [2024-11-06 07:46:48.961327] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
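run_test launches bdevio in wait mode for this phase: judging by the waitforlisten/perform_tests sequence traced below, the -w flag makes bdevio sit on the default RPC socket instead of running immediately, and the harness then triggers the suites over RPC. Reduced to its essentials:

  # the two-step pattern used by the harness (paths as in this job)
  ./test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json '' &  # start, wait for RPC
  ./test/bdev/bdevio/tests.py perform_tests                          # kick off the CUnit suites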
00:10:26.365 [2024-11-06 07:46:48.961505] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61368 ] 00:10:26.623 [2024-11-06 07:46:49.137575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:26.882 [2024-11-06 07:46:49.283445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:26.882 [2024-11-06 07:46:49.283526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.882 [2024-11-06 07:46:49.283538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:27.450 07:46:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:27.450 07:46:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:10:27.450 07:46:49 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:10:27.762 I/O targets: 00:10:27.762 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:10:27.762 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:10:27.762 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:27.762 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:27.762 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:27.762 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:10:27.762 00:10:27.762 00:10:27.762 CUnit - A unit testing framework for C - Version 2.1-3 00:10:27.762 http://cunit.sourceforge.net/ 00:10:27.762 00:10:27.762 00:10:27.762 Suite: bdevio tests on: Nvme3n1 00:10:27.762 Test: blockdev write read block ...passed 00:10:27.762 Test: blockdev write zeroes read block ...passed 00:10:27.762 Test: blockdev write zeroes read no split ...passed 00:10:27.762 Test: blockdev write zeroes read split ...passed 00:10:27.762 Test: blockdev write zeroes read split partial ...passed 00:10:27.762 Test: blockdev reset ...[2024-11-06 07:46:50.224711] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:10:27.762 [2024-11-06 07:46:50.228727] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
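Each suite's reset test drives a full controller disconnect/reconnect: nvme_ctrlr logs "resetting controller" when the device is torn down, and bdev_nvme reports "Resetting controller successful" once it re-attaches. The same notice pair recurs below once per suite, against whichever PCI address backs that suite's bdev.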
00:10:27.762 passed 00:10:27.762 Test: blockdev write read 8 blocks ...passed 00:10:27.762 Test: blockdev write read size > 128k ...passed 00:10:27.762 Test: blockdev write read invalid size ...passed 00:10:27.762 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:27.762 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:27.762 Test: blockdev write read max offset ...passed 00:10:27.762 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:27.762 Test: blockdev writev readv 8 blocks ...passed 00:10:27.762 Test: blockdev writev readv 30 x 1block ...passed 00:10:27.762 Test: blockdev writev readv block ...passed 00:10:27.762 Test: blockdev writev readv size > 128k ...passed 00:10:27.762 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:27.762 Test: blockdev comparev and writev ...[2024-11-06 07:46:50.237293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c200a000 len:0x1000 00:10:27.762 [2024-11-06 07:46:50.237355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:27.762 passed 00:10:27.762 Test: blockdev nvme passthru rw ...passed 00:10:27.762 Test: blockdev nvme passthru vendor specific ...passed 00:10:27.762 Test: blockdev nvme admin passthru ...[2024-11-06 07:46:50.238221] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:27.762 [2024-11-06 07:46:50.238280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:27.762 passed 00:10:27.762 Test: blockdev copy ...passed 00:10:27.762 Suite: bdevio tests on: Nvme2n3 00:10:27.762 Test: blockdev write read block ...passed 00:10:27.762 Test: blockdev write zeroes read block ...passed 00:10:27.762 Test: blockdev write zeroes read no split ...passed 00:10:27.762 Test: blockdev write zeroes read split ...passed 00:10:27.762 Test: blockdev write zeroes read split partial ...passed 00:10:27.762 Test: blockdev reset ...[2024-11-06 07:46:50.323663] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:27.762 [2024-11-06 07:46:50.328066] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
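The COMPARE FAILURE (02/85) and INVALID OPCODE (00/01) completions logged by the comparev/writev and passthru cases are not run failures: judging by the per-test "passed" verdicts and the zero-failure summary at the end of this block, they are the deliberately provoked error paths of those cases, printed while status propagation is checked.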
00:10:27.762 passed 00:10:27.762 Test: blockdev write read 8 blocks ...passed 00:10:27.762 Test: blockdev write read size > 128k ...passed 00:10:27.762 Test: blockdev write read invalid size ...passed 00:10:27.762 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:27.762 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:27.762 Test: blockdev write read max offset ...passed 00:10:27.762 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:27.762 Test: blockdev writev readv 8 blocks ...passed 00:10:27.762 Test: blockdev writev readv 30 x 1block ...passed 00:10:27.762 Test: blockdev writev readv block ...passed 00:10:27.762 Test: blockdev writev readv size > 128k ...passed 00:10:27.762 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:27.762 Test: blockdev comparev and writev ...[2024-11-06 07:46:50.335872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2a5206000 len:0x1000 00:10:27.762 [2024-11-06 07:46:50.335940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:27.762 passed 00:10:27.762 Test: blockdev nvme passthru rw ...passed 00:10:27.762 Test: blockdev nvme passthru vendor specific ...[2024-11-06 07:46:50.336899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:27.762 [2024-11-06 07:46:50.336945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:27.762 passed 00:10:27.762 Test: blockdev nvme admin passthru ...passed 00:10:27.762 Test: blockdev copy ...passed 00:10:27.762 Suite: bdevio tests on: Nvme2n2 00:10:27.762 Test: blockdev write read block ...passed 00:10:27.762 Test: blockdev write zeroes read block ...passed 00:10:27.762 Test: blockdev write zeroes read no split ...passed 00:10:27.762 Test: blockdev write zeroes read split ...passed 00:10:28.022 Test: blockdev write zeroes read split partial ...passed 00:10:28.022 Test: blockdev reset ...[2024-11-06 07:46:50.405472] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:28.022 [2024-11-06 07:46:50.409897] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
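Worth noting while reading these suites: Nvme2n1, Nvme2n2, and Nvme2n3 are namespaces 1-3 of the single controller at 0000:00:12.0 (serial 12342, per the bdev dump above), so their reset tests all target the same PCI address. Nvme3n1 at 0000:00:13.0 belongs to the FDP subsystem (nqn.2019-08.org.qemu:fdp-subsys3) and is the only bdev reporting multi_ctrlr and a shareable namespace.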
00:10:28.022 passed 00:10:28.022 Test: blockdev write read 8 blocks ...passed 00:10:28.022 Test: blockdev write read size > 128k ...passed 00:10:28.022 Test: blockdev write read invalid size ...passed 00:10:28.022 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:28.022 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:28.022 Test: blockdev write read max offset ...passed 00:10:28.022 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:28.022 Test: blockdev writev readv 8 blocks ...passed 00:10:28.022 Test: blockdev writev readv 30 x 1block ...passed 00:10:28.022 Test: blockdev writev readv block ...passed 00:10:28.022 Test: blockdev writev readv size > 128k ...passed 00:10:28.022 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:28.022 Test: blockdev comparev and writev ...[2024-11-06 07:46:50.417373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d203c000 len:0x1000 00:10:28.022 [2024-11-06 07:46:50.417441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:28.022 passed 00:10:28.022 Test: blockdev nvme passthru rw ...passed 00:10:28.023 Test: blockdev nvme passthru vendor specific ...[2024-11-06 07:46:50.418243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:28.023 [2024-11-06 07:46:50.418309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:28.023 passed 00:10:28.023 Test: blockdev nvme admin passthru ...passed 00:10:28.023 Test: blockdev copy ...passed 00:10:28.023 Suite: bdevio tests on: Nvme2n1 00:10:28.023 Test: blockdev write read block ...passed 00:10:28.023 Test: blockdev write zeroes read block ...passed 00:10:28.023 Test: blockdev write zeroes read no split ...passed 00:10:28.023 Test: blockdev write zeroes read split ...passed 00:10:28.023 Test: blockdev write zeroes read split partial ...passed 00:10:28.023 Test: blockdev reset ...[2024-11-06 07:46:50.486763] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:28.023 [2024-11-06 07:46:50.491274] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:10:28.023 passed 00:10:28.023 Test: blockdev write read 8 blocks ...passed 00:10:28.023 Test: blockdev write read size > 128k ...passed 00:10:28.023 Test: blockdev write read invalid size ...passed 00:10:28.023 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:28.023 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:28.023 Test: blockdev write read max offset ...passed 00:10:28.023 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:28.023 Test: blockdev writev readv 8 blocks ...passed 00:10:28.023 Test: blockdev writev readv 30 x 1block ...passed 00:10:28.023 Test: blockdev writev readv block ...passed 00:10:28.023 Test: blockdev writev readv size > 128k ...passed 00:10:28.023 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:28.023 Test: blockdev comparev and writev ...[2024-11-06 07:46:50.500819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d2038000 len:0x1000 00:10:28.023 passed 00:10:28.023 Test: blockdev nvme passthru rw ...[2024-11-06 07:46:50.500948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:28.023 passed 00:10:28.023 Test: blockdev nvme passthru vendor specific ...[2024-11-06 07:46:50.502027] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:28.023 [2024-11-06 07:46:50.502118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:28.023 passed 00:10:28.023 Test: blockdev nvme admin passthru ...passed 00:10:28.023 Test: blockdev copy ...passed 00:10:28.023 Suite: bdevio tests on: Nvme1n1 00:10:28.023 Test: blockdev write read block ...passed 00:10:28.023 Test: blockdev write zeroes read block ...passed 00:10:28.023 Test: blockdev write zeroes read no split ...passed 00:10:28.023 Test: blockdev write zeroes read split ...passed 00:10:28.023 Test: blockdev write zeroes read split partial ...passed 00:10:28.023 Test: blockdev reset ...[2024-11-06 07:46:50.579329] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:10:28.023 [2024-11-06 07:46:50.583651] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:10:28.023 passed 00:10:28.023 Test: blockdev write read 8 blocks ...passed 00:10:28.023 Test: blockdev write read size > 128k ...passed 00:10:28.023 Test: blockdev write read invalid size ...passed 00:10:28.023 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:28.023 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:28.023 Test: blockdev write read max offset ...passed 00:10:28.023 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:28.023 Test: blockdev writev readv 8 blocks ...passed 00:10:28.023 Test: blockdev writev readv 30 x 1block ...passed 00:10:28.023 Test: blockdev writev readv block ...passed 00:10:28.023 Test: blockdev writev readv size > 128k ...passed 00:10:28.023 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:28.023 Test: blockdev comparev and writev ...[2024-11-06 07:46:50.592285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d2034000 len:0x1000 00:10:28.023 [2024-11-06 07:46:50.592365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:28.023 passed 00:10:28.023 Test: blockdev nvme passthru rw ...passed 00:10:28.023 Test: blockdev nvme passthru vendor specific ...[2024-11-06 07:46:50.593359] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:28.023 [2024-11-06 07:46:50.593414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:28.023 passed 00:10:28.023 Test: blockdev nvme admin passthru ...passed 00:10:28.023 Test: blockdev copy ...passed 00:10:28.023 Suite: bdevio tests on: Nvme0n1 00:10:28.023 Test: blockdev write read block ...passed 00:10:28.023 Test: blockdev write zeroes read block ...passed 00:10:28.023 Test: blockdev write zeroes read no split ...passed 00:10:28.023 Test: blockdev write zeroes read split ...passed 00:10:28.281 Test: blockdev write zeroes read split partial ...passed 00:10:28.281 Test: blockdev reset ...[2024-11-06 07:46:50.678728] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:10:28.281 [2024-11-06 07:46:50.683103] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:10:28.281 passed 00:10:28.281 Test: blockdev write read 8 blocks ...passed 00:10:28.281 Test: blockdev write read size > 128k ...passed 00:10:28.281 Test: blockdev write read invalid size ...passed 00:10:28.281 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:28.281 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:28.281 Test: blockdev write read max offset ...passed 00:10:28.281 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:28.281 Test: blockdev writev readv 8 blocks ...passed 00:10:28.281 Test: blockdev writev readv 30 x 1block ...passed 00:10:28.281 Test: blockdev writev readv block ...passed 00:10:28.281 Test: blockdev writev readv size > 128k ...passed 00:10:28.281 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:28.281 Test: blockdev comparev and writev ...[2024-11-06 07:46:50.690854] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:10:28.281 separate metadata which is not supported yet. 
00:10:28.281 passed 00:10:28.281 Test: blockdev nvme passthru rw ...passed 00:10:28.281 Test: blockdev nvme passthru vendor specific ...[2024-11-06 07:46:50.691560] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:10:28.281 [2024-11-06 07:46:50.691638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:10:28.281 passed 00:10:28.281 Test: blockdev nvme admin passthru ...passed 00:10:28.281 Test: blockdev copy ...passed 00:10:28.281 00:10:28.281 Run Summary: Type Total Ran Passed Failed Inactive 00:10:28.281 suites 6 6 n/a 0 0 00:10:28.281 tests 138 138 138 0 0 00:10:28.281 asserts 893 893 893 0 n/a 00:10:28.281 00:10:28.281 Elapsed time = 1.487 seconds 00:10:28.281 0 00:10:28.281 07:46:50 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61368 00:10:28.281 07:46:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 61368 ']' 00:10:28.281 07:46:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 61368 00:10:28.281 07:46:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:10:28.281 07:46:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:28.281 07:46:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61368 00:10:28.281 07:46:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:28.281 killing process with pid 61368 00:10:28.281 07:46:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:28.281 07:46:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61368' 00:10:28.281 07:46:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 61368 00:10:28.281 07:46:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 61368 00:10:29.216 07:46:51 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:10:29.216 00:10:29.216 real 0m2.893s 00:10:29.216 user 0m7.467s 00:10:29.216 sys 0m0.443s 00:10:29.216 07:46:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:29.216 07:46:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:10:29.216 ************************************ 00:10:29.216 END TEST bdev_bounds 00:10:29.216 ************************************ 00:10:29.216 07:46:51 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:29.216 07:46:51 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:29.216 07:46:51 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:29.216 07:46:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:29.216 ************************************ 00:10:29.216 START TEST bdev_nbd 00:10:29.216 ************************************ 00:10:29.216 07:46:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:29.216 07:46:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:10:29.216 07:46:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:10:29.216 07:46:51 
blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:29.216 07:46:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:29.216 07:46:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:29.216 07:46:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:10:29.216 07:46:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:10:29.216 07:46:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:10:29.216 07:46:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:10:29.216 07:46:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:10:29.216 07:46:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:10:29.216 07:46:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:29.216 07:46:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:10:29.216 07:46:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:29.216 07:46:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:10:29.216 07:46:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61432 00:10:29.216 07:46:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:29.216 07:46:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:10:29.216 07:46:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61432 /var/tmp/spdk-nbd.sock 00:10:29.216 07:46:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 61432 ']' 00:10:29.216 07:46:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:29.216 07:46:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:29.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:29.216 07:46:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:29.217 07:46:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:29.217 07:46:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:10:29.475 [2024-11-06 07:46:51.928698] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
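The nbd phase below exports each bdev as a kernel /dev/nbdN node through a dedicated bdev_svc process and sanity-checks every node with a single direct-I/O dd. The traced sequence, reduced to one device (paths as in this job; only the dd output file is shortened here):

  # start a bare bdev service on a private RPC socket, then map a bdev to an nbd node
  ./test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json test/bdev/bdev.json '' &
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0  # map bdev -> nbd
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct                 # read one 4 KiB block
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0           # unmap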
00:10:29.475 [2024-11-06 07:46:51.928890] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:29.757 [2024-11-06 07:46:52.118489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.757 [2024-11-06 07:46:52.277545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.743 07:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:30.743 07:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:10:30.743 07:46:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:30.743 07:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:30.743 07:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:30.743 07:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:10:30.743 07:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:30.743 07:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:30.743 07:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:30.743 07:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:10:30.743 07:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:10:30.743 07:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:10:30.743 07:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:10:30.743 07:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:30.743 07:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:10:31.001 07:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:10:31.001 07:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:10:31.001 07:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:10:31.001 07:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:10:31.001 07:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:10:31.001 07:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:31.001 07:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:31.001 07:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:10:31.001 07:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:10:31.001 07:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:31.001 07:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:31.001 07:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:31.001 1+0 records in 
00:10:31.001 1+0 records out 00:10:31.001 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000432451 s, 9.5 MB/s 00:10:31.001 07:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:31.001 07:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:10:31.001 07:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:31.001 07:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:31.001 07:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:10:31.001 07:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:31.001 07:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:31.001 07:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:10:31.259 07:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:10:31.259 07:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:10:31.259 07:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:10:31.259 07:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:10:31.259 07:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:10:31.259 07:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:31.259 07:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:31.259 07:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:10:31.259 07:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:10:31.259 07:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:31.259 07:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:31.259 07:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:31.259 1+0 records in 00:10:31.259 1+0 records out 00:10:31.259 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000584934 s, 7.0 MB/s 00:10:31.259 07:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:31.259 07:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:10:31.259 07:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:31.259 07:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:31.259 07:46:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:10:31.259 07:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:31.259 07:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:31.259 07:46:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:10:31.517 07:46:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:10:31.517 07:46:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:10:31.517 07:46:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:10:31.517 07:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:10:31.517 07:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:10:31.517 07:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:31.517 07:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:31.517 07:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:10:31.517 07:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:10:31.517 07:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:31.517 07:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:31.517 07:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:31.517 1+0 records in 00:10:31.517 1+0 records out 00:10:31.517 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000610454 s, 6.7 MB/s 00:10:31.517 07:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:31.517 07:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:10:31.517 07:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:31.517 07:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:31.517 07:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:10:31.517 07:46:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:31.517 07:46:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:31.517 07:46:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:10:32.083 07:46:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:10:32.083 07:46:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:10:32.083 07:46:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:10:32.083 07:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:10:32.083 07:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:10:32.083 07:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:32.083 07:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:32.083 07:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:10:32.083 07:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:10:32.083 07:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:32.083 07:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:32.083 07:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:32.083 1+0 records in 00:10:32.083 1+0 records out 00:10:32.083 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000454779 s, 9.0 MB/s 00:10:32.083 07:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:32.083 07:46:54 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:10:32.083 07:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:32.083 07:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:32.083 07:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:10:32.083 07:46:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:32.083 07:46:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:32.083 07:46:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:10:32.340 07:46:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:10:32.340 07:46:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:10:32.340 07:46:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:10:32.340 07:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:10:32.340 07:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:10:32.340 07:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:32.340 07:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:32.340 07:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:10:32.340 07:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:10:32.340 07:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:32.340 07:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:32.340 07:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:32.340 1+0 records in 00:10:32.340 1+0 records out 00:10:32.340 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000625974 s, 6.5 MB/s 00:10:32.340 07:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:32.340 07:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:10:32.340 07:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:32.340 07:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:32.340 07:46:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:10:32.340 07:46:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:32.340 07:46:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:32.340 07:46:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:10:32.598 07:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:10:32.598 07:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:10:32.598 07:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:10:32.598 07:46:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:10:32.598 07:46:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:10:32.598 07:46:55 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:32.598 07:46:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:32.598 07:46:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:10:32.598 07:46:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:10:32.598 07:46:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:32.598 07:46:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:32.598 07:46:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:32.598 1+0 records in 00:10:32.598 1+0 records out 00:10:32.598 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000881888 s, 4.6 MB/s 00:10:32.598 07:46:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:32.598 07:46:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:10:32.598 07:46:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:32.598 07:46:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:32.598 07:46:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:10:32.598 07:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:32.598 07:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:32.598 07:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:32.856 07:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:10:32.856 { 00:10:32.856 "nbd_device": "/dev/nbd0", 00:10:32.856 "bdev_name": "Nvme0n1" 00:10:32.856 }, 00:10:32.856 { 00:10:32.856 "nbd_device": "/dev/nbd1", 00:10:32.856 "bdev_name": "Nvme1n1" 00:10:32.856 }, 00:10:32.856 { 00:10:32.856 "nbd_device": "/dev/nbd2", 00:10:32.856 "bdev_name": "Nvme2n1" 00:10:32.856 }, 00:10:32.856 { 00:10:32.856 "nbd_device": "/dev/nbd3", 00:10:32.856 "bdev_name": "Nvme2n2" 00:10:32.856 }, 00:10:32.856 { 00:10:32.856 "nbd_device": "/dev/nbd4", 00:10:32.856 "bdev_name": "Nvme2n3" 00:10:32.856 }, 00:10:32.856 { 00:10:32.856 "nbd_device": "/dev/nbd5", 00:10:32.856 "bdev_name": "Nvme3n1" 00:10:32.856 } 00:10:32.856 ]' 00:10:32.856 07:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:10:32.856 07:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:10:32.856 07:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:10:32.856 { 00:10:32.856 "nbd_device": "/dev/nbd0", 00:10:32.856 "bdev_name": "Nvme0n1" 00:10:32.856 }, 00:10:32.856 { 00:10:32.856 "nbd_device": "/dev/nbd1", 00:10:32.856 "bdev_name": "Nvme1n1" 00:10:32.856 }, 00:10:32.856 { 00:10:32.856 "nbd_device": "/dev/nbd2", 00:10:32.856 "bdev_name": "Nvme2n1" 00:10:32.856 }, 00:10:32.856 { 00:10:32.856 "nbd_device": "/dev/nbd3", 00:10:32.856 "bdev_name": "Nvme2n2" 00:10:32.856 }, 00:10:32.856 { 00:10:32.856 "nbd_device": "/dev/nbd4", 00:10:32.856 "bdev_name": "Nvme2n3" 00:10:32.856 }, 00:10:32.856 { 00:10:32.856 "nbd_device": "/dev/nbd5", 00:10:32.856 "bdev_name": "Nvme3n1" 00:10:32.856 } 00:10:32.856 ]' 00:10:32.856 07:46:55 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:10:32.856 07:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:32.856 07:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:10:32.856 07:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:32.856 07:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:32.856 07:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:32.856 07:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:33.114 07:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:33.114 07:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:33.114 07:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:33.114 07:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:33.114 07:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:33.114 07:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:33.114 07:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:33.114 07:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:33.114 07:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:33.114 07:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:33.373 07:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:33.373 07:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:33.373 07:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:33.373 07:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:33.373 07:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:33.373 07:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:33.373 07:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:33.373 07:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:33.373 07:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:33.373 07:46:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:10:33.632 07:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:10:33.632 07:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:10:33.632 07:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:10:33.632 07:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:33.632 07:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:33.632 07:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:10:33.890 07:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:33.890 07:46:56 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:10:33.890 07:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:33.890 07:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:10:34.149 07:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:10:34.149 07:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:10:34.149 07:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:10:34.149 07:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:34.149 07:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:34.149 07:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:10:34.149 07:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:34.149 07:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:34.149 07:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:34.149 07:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:10:34.416 07:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:10:34.416 07:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:10:34.416 07:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:10:34.416 07:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:34.416 07:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:34.416 07:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:10:34.416 07:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:34.416 07:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:34.416 07:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:34.416 07:46:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:10:34.674 07:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:10:34.674 07:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:10:34.674 07:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:10:34.674 07:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:34.674 07:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:34.674 07:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:10:34.674 07:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:34.674 07:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:34.674 07:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:34.674 07:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:34.674 07:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:34.933 07:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:34.933 07:46:57 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:34.933 07:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:34.933 07:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:34.933 07:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:10:34.933 07:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:34.933 07:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:10:34.933 07:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:10:34.933 07:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:10:34.933 07:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:10:34.933 07:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:10:34.933 07:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:10:34.933 07:46:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:10:34.933 07:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:34.933 07:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:34.933 07:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:34.933 07:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:34.933 07:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:34.933 07:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:10:34.933 07:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:34.933 07:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:34.933 07:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:34.933 07:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:34.933 07:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:34.933 07:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:10:34.933 07:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:34.933 07:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:34.933 07:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:10:35.191 /dev/nbd0 00:10:35.191 07:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:35.191 07:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:35.191 07:46:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:10:35.191 07:46:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:10:35.191 07:46:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:35.191 
07:46:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:35.191 07:46:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:10:35.191 07:46:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:10:35.191 07:46:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:35.191 07:46:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:35.191 07:46:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:35.191 1+0 records in 00:10:35.191 1+0 records out 00:10:35.191 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000847495 s, 4.8 MB/s 00:10:35.191 07:46:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:35.191 07:46:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:10:35.191 07:46:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:35.191 07:46:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:35.191 07:46:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:10:35.191 07:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:35.191 07:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:35.191 07:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:10:35.450 /dev/nbd1 00:10:35.450 07:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:35.450 07:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:35.450 07:46:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:10:35.450 07:46:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:10:35.450 07:46:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:35.450 07:46:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:35.450 07:46:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:10:35.450 07:46:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:10:35.450 07:46:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:35.450 07:46:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:35.450 07:46:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:35.450 1+0 records in 00:10:35.450 1+0 records out 00:10:35.450 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000833915 s, 4.9 MB/s 00:10:35.450 07:46:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:35.450 07:46:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:10:35.450 07:46:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:35.450 07:46:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:35.450 07:46:57 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@889 -- # return 0 00:10:35.450 07:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:35.450 07:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:35.450 07:46:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:10:35.708 /dev/nbd10 00:10:35.708 07:46:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:10:35.708 07:46:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:10:35.708 07:46:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:10:35.708 07:46:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:10:35.708 07:46:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:35.708 07:46:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:35.708 07:46:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:10:35.708 07:46:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:10:35.708 07:46:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:35.708 07:46:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:35.708 07:46:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:35.708 1+0 records in 00:10:35.708 1+0 records out 00:10:35.708 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000516291 s, 7.9 MB/s 00:10:35.708 07:46:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:35.708 07:46:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:10:35.708 07:46:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:35.708 07:46:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:35.708 07:46:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:10:35.708 07:46:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:35.708 07:46:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:35.708 07:46:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:10:36.275 /dev/nbd11 00:10:36.276 07:46:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:10:36.276 07:46:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:10:36.276 07:46:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:10:36.276 07:46:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:10:36.276 07:46:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:36.276 07:46:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:36.276 07:46:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:10:36.276 07:46:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:10:36.276 07:46:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:36.276 07:46:58 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:36.276 07:46:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:36.276 1+0 records in 00:10:36.276 1+0 records out 00:10:36.276 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000642526 s, 6.4 MB/s 00:10:36.276 07:46:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:36.276 07:46:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:10:36.276 07:46:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:36.276 07:46:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:36.276 07:46:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:10:36.276 07:46:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:36.276 07:46:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:36.276 07:46:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:10:36.543 /dev/nbd12 00:10:36.543 07:46:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:10:36.543 07:46:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:10:36.543 07:46:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:10:36.543 07:46:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:10:36.543 07:46:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:36.543 07:46:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:36.543 07:46:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:10:36.543 07:46:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:10:36.543 07:46:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:36.543 07:46:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:36.543 07:46:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:36.543 1+0 records in 00:10:36.543 1+0 records out 00:10:36.543 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00077027 s, 5.3 MB/s 00:10:36.543 07:46:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:36.543 07:46:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:10:36.543 07:46:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:36.543 07:46:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:36.543 07:46:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:10:36.543 07:46:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:36.543 07:46:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:36.543 07:46:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:10:36.802 /dev/nbd13 
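[Note] The six exports above, and the probe for nbd13 that follows, all use the same attach-and-probe pattern: nbd_start_disk binds a bdev to a kernel /dev/nbdX node over the RPC socket, then waitfornbd polls /proc/partitions (bounded at 20 attempts) and issues a single 4 KiB O_DIRECT read to prove the node actually answers. A condensed sketch, with the socket and repo paths exactly as this job uses them; the delay between retries is an assumption, since the trace records only the bounded counter and the grep:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
        nbd_start_disk Nvme3n1 /dev/nbd13
    for ((i = 1; i <= 20; i++)); do
        grep -q -w nbd13 /proc/partitions && break   # node visible to the kernel?
        sleep 0.1                                    # retry delay (assumed, not in trace)
    done
    # one direct-I/O read; a non-empty scratch file means the device is usable
    dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
        bs=4096 count=1 iflag=direct
    [ "$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)" != 0 ]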
00:10:36.802 07:46:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:10:36.802 07:46:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:10:36.802 07:46:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:10:36.802 07:46:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:10:36.802 07:46:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:36.802 07:46:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:36.802 07:46:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:10:36.802 07:46:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:10:36.802 07:46:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:36.802 07:46:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:36.802 07:46:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:36.802 1+0 records in 00:10:36.802 1+0 records out 00:10:36.802 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000671154 s, 6.1 MB/s 00:10:36.802 07:46:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:36.802 07:46:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:10:36.802 07:46:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:36.802 07:46:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:36.802 07:46:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:10:36.802 07:46:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:36.802 07:46:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:36.802 07:46:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:36.802 07:46:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:36.802 07:46:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:37.061 07:46:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:37.061 { 00:10:37.061 "nbd_device": "/dev/nbd0", 00:10:37.061 "bdev_name": "Nvme0n1" 00:10:37.061 }, 00:10:37.061 { 00:10:37.061 "nbd_device": "/dev/nbd1", 00:10:37.061 "bdev_name": "Nvme1n1" 00:10:37.061 }, 00:10:37.061 { 00:10:37.061 "nbd_device": "/dev/nbd10", 00:10:37.061 "bdev_name": "Nvme2n1" 00:10:37.061 }, 00:10:37.061 { 00:10:37.061 "nbd_device": "/dev/nbd11", 00:10:37.061 "bdev_name": "Nvme2n2" 00:10:37.061 }, 00:10:37.061 { 00:10:37.061 "nbd_device": "/dev/nbd12", 00:10:37.061 "bdev_name": "Nvme2n3" 00:10:37.061 }, 00:10:37.061 { 00:10:37.061 "nbd_device": "/dev/nbd13", 00:10:37.061 "bdev_name": "Nvme3n1" 00:10:37.061 } 00:10:37.061 ]' 00:10:37.061 07:46:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:37.061 07:46:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:37.061 { 00:10:37.061 "nbd_device": "/dev/nbd0", 00:10:37.061 "bdev_name": "Nvme0n1" 00:10:37.061 }, 00:10:37.061 { 00:10:37.061 "nbd_device": "/dev/nbd1", 00:10:37.061 "bdev_name": "Nvme1n1" 00:10:37.061 
}, 00:10:37.061 { 00:10:37.061 "nbd_device": "/dev/nbd10", 00:10:37.061 "bdev_name": "Nvme2n1" 00:10:37.061 }, 00:10:37.061 { 00:10:37.061 "nbd_device": "/dev/nbd11", 00:10:37.061 "bdev_name": "Nvme2n2" 00:10:37.061 }, 00:10:37.061 { 00:10:37.061 "nbd_device": "/dev/nbd12", 00:10:37.061 "bdev_name": "Nvme2n3" 00:10:37.061 }, 00:10:37.061 { 00:10:37.061 "nbd_device": "/dev/nbd13", 00:10:37.061 "bdev_name": "Nvme3n1" 00:10:37.061 } 00:10:37.061 ]' 00:10:37.061 07:46:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:37.061 /dev/nbd1 00:10:37.062 /dev/nbd10 00:10:37.062 /dev/nbd11 00:10:37.062 /dev/nbd12 00:10:37.062 /dev/nbd13' 00:10:37.062 07:46:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:37.062 /dev/nbd1 00:10:37.062 /dev/nbd10 00:10:37.062 /dev/nbd11 00:10:37.062 /dev/nbd12 00:10:37.062 /dev/nbd13' 00:10:37.062 07:46:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:37.062 07:46:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:10:37.062 07:46:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:10:37.062 07:46:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:10:37.062 07:46:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:10:37.062 07:46:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:10:37.062 07:46:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:37.062 07:46:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:37.062 07:46:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:37.062 07:46:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:37.062 07:46:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:37.062 07:46:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:10:37.062 256+0 records in 00:10:37.062 256+0 records out 00:10:37.062 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00605401 s, 173 MB/s 00:10:37.062 07:46:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:37.062 07:46:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:37.320 256+0 records in 00:10:37.320 256+0 records out 00:10:37.320 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.144823 s, 7.2 MB/s 00:10:37.320 07:46:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:37.320 07:46:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:37.578 256+0 records in 00:10:37.578 256+0 records out 00:10:37.578 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.138921 s, 7.5 MB/s 00:10:37.578 07:46:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:37.578 07:46:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:10:37.578 256+0 records in 00:10:37.578 256+0 records out 00:10:37.578 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.157266 s, 6.7 MB/s 00:10:37.578 07:47:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:37.578 07:47:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:10:37.836 256+0 records in 00:10:37.836 256+0 records out 00:10:37.836 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146767 s, 7.1 MB/s 00:10:37.836 07:47:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:37.836 07:47:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:10:37.836 256+0 records in 00:10:37.836 256+0 records out 00:10:37.836 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.132283 s, 7.9 MB/s 00:10:37.836 07:47:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:37.836 07:47:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:10:38.093 256+0 records in 00:10:38.093 256+0 records out 00:10:38.093 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.158381 s, 6.6 MB/s 00:10:38.093 07:47:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:10:38.093 07:47:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:38.093 07:47:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:38.093 07:47:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:38.093 07:47:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:38.093 07:47:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:38.093 07:47:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:38.093 07:47:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:38.093 07:47:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:10:38.093 07:47:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:38.093 07:47:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:10:38.093 07:47:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:38.093 07:47:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:10:38.093 07:47:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:38.093 07:47:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:10:38.093 07:47:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:38.093 07:47:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:10:38.093 07:47:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:38.093 07:47:00 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:10:38.093 07:47:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:38.094 07:47:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:10:38.094 07:47:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:38.094 07:47:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:38.094 07:47:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:38.094 07:47:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:38.094 07:47:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:38.094 07:47:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:38.351 07:47:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:38.351 07:47:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:38.351 07:47:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:38.351 07:47:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:38.351 07:47:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:38.351 07:47:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:38.351 07:47:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:38.351 07:47:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:38.351 07:47:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:38.351 07:47:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:38.918 07:47:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:38.918 07:47:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:38.918 07:47:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:38.918 07:47:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:38.918 07:47:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:38.918 07:47:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:38.918 07:47:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:38.918 07:47:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:38.918 07:47:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:38.918 07:47:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:10:39.176 07:47:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:10:39.176 07:47:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:10:39.176 07:47:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:10:39.176 07:47:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:39.176 07:47:01 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:39.176 07:47:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:10:39.176 07:47:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:39.176 07:47:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:39.176 07:47:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:39.176 07:47:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:10:39.435 07:47:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:10:39.435 07:47:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:10:39.435 07:47:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:10:39.435 07:47:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:39.435 07:47:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:39.435 07:47:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:10:39.435 07:47:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:39.435 07:47:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:39.435 07:47:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:39.435 07:47:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:10:39.693 07:47:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:10:39.693 07:47:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:10:39.693 07:47:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:10:39.693 07:47:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:39.693 07:47:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:39.693 07:47:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:10:39.693 07:47:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:39.693 07:47:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:39.693 07:47:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:39.693 07:47:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:10:39.951 07:47:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:10:39.951 07:47:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:10:39.951 07:47:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:10:39.951 07:47:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:39.951 07:47:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:39.951 07:47:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:10:39.951 07:47:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:39.951 07:47:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:39.951 07:47:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:39.951 07:47:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:10:39.951 07:47:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:40.209 07:47:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:40.209 07:47:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:40.209 07:47:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:40.209 07:47:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:40.209 07:47:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:10:40.209 07:47:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:40.209 07:47:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:10:40.209 07:47:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:10:40.209 07:47:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:10:40.209 07:47:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:10:40.209 07:47:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:40.209 07:47:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:10:40.209 07:47:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:10:40.209 07:47:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:40.209 07:47:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:10:40.209 07:47:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:10:40.776 malloc_lvol_verify 00:10:40.776 07:47:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:10:40.776 f60a3ab3-0817-4261-9d56-b5b5ea323256 00:10:40.776 07:47:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:10:41.034 a4fa5e06-1214-4880-9b82-03c40e392181 00:10:41.292 07:47:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:10:41.550 /dev/nbd0 00:10:41.550 07:47:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:10:41.550 07:47:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:10:41.550 07:47:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:10:41.550 07:47:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:10:41.550 07:47:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:10:41.550 mke2fs 1.47.0 (5-Feb-2023) 00:10:41.550 Discarding device blocks: 0/4096 done 00:10:41.550 Creating filesystem with 4096 1k blocks and 1024 inodes 00:10:41.550 00:10:41.550 Allocating group tables: 0/1 done 00:10:41.550 Writing inode tables: 0/1 done 00:10:41.550 Creating journal (1024 blocks): done 00:10:41.550 Writing superblocks and filesystem accounting information: 0/1 done 00:10:41.550 00:10:41.550 07:47:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:10:41.550 07:47:04 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:41.550 07:47:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:41.550 07:47:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:41.550 07:47:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:41.550 07:47:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:41.550 07:47:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:41.809 07:47:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:41.809 07:47:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:41.809 07:47:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:41.809 07:47:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:41.809 07:47:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:41.809 07:47:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:41.809 07:47:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:41.809 07:47:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:41.809 07:47:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61432 00:10:41.809 07:47:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 61432 ']' 00:10:41.809 07:47:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 61432 00:10:41.809 07:47:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:10:41.809 07:47:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:41.809 07:47:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61432 00:10:41.809 killing process with pid 61432 00:10:41.809 07:47:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:41.809 07:47:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:41.809 07:47:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61432' 00:10:41.809 07:47:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 61432 00:10:41.809 07:47:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 61432 00:10:43.184 07:47:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:10:43.184 00:10:43.184 real 0m13.692s 00:10:43.184 user 0m19.899s 00:10:43.184 sys 0m4.255s 00:10:43.184 07:47:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:43.184 ************************************ 00:10:43.184 07:47:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:10:43.184 END TEST bdev_nbd 00:10:43.184 ************************************ 00:10:43.184 07:47:05 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:10:43.184 07:47:05 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:10:43.184 skipping fio tests on NVMe due to multi-ns failures. 00:10:43.184 07:47:05 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
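[Note] The block above is the heart of the NBD data-path check: a 1 MiB random pattern is written once, replayed onto every exported node with O_DIRECT, and read back byte-for-byte with cmp; after the nodes are detached, nbd_get_disks must report an empty list (the jq -r '.[] | .nbd_device' | grep -c /dev/nbd count of 0 asserted above). The same session also smoke-tests a logical volume: bdev_malloc_create, bdev_lvol_create_lvstore and bdev_lvol_create build lvs/lvol, it is exported on /dev/nbd0, /sys/block/nbd0/size is checked to be non-zero, and mkfs.ext4 completes on it before the target (pid 61432) is killed. fio is then skipped on NVMe because of the multi-ns failures noted. A condensed sketch of the write/verify loop, paths as in this job:

    testdir=/home/vagrant/spdk_repo/spdk/test/bdev
    nbd_list="/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13"
    dd if=/dev/urandom of=$testdir/nbdrandtest bs=4096 count=256   # 1 MiB pattern
    for nbd in $nbd_list; do
        dd if=$testdir/nbdrandtest of=$nbd bs=4096 count=256 oflag=direct
    done
    for nbd in $nbd_list; do
        cmp -b -n 1M $testdir/nbdrandtest $nbd   # byte-for-byte readback check
    done
    rm $testdir/nbdrandtest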
00:10:43.184 07:47:05 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:43.184 07:47:05 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:43.184 07:47:05 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:10:43.184 07:47:05 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:43.184 07:47:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:43.184 ************************************ 00:10:43.184 START TEST bdev_verify 00:10:43.184 ************************************ 00:10:43.184 07:47:05 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:43.184 [2024-11-06 07:47:05.647562] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:10:43.184 [2024-11-06 07:47:05.647719] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61844 ] 00:10:43.442 [2024-11-06 07:47:05.830335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:43.442 [2024-11-06 07:47:05.988347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.442 [2024-11-06 07:47:05.988355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:44.377 Running I/O for 5 seconds... 00:10:46.688 20800.00 IOPS, 81.25 MiB/s [2024-11-06T07:47:10.259Z] 19648.00 IOPS, 76.75 MiB/s [2024-11-06T07:47:11.194Z] 19285.33 IOPS, 75.33 MiB/s [2024-11-06T07:47:12.129Z] 18832.00 IOPS, 73.56 MiB/s [2024-11-06T07:47:12.129Z] 18969.60 IOPS, 74.10 MiB/s 00:10:49.500 Latency(us) 00:10:49.500 [2024-11-06T07:47:12.129Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:49.500 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:49.500 Verification LBA range: start 0x0 length 0xbd0bd 00:10:49.500 Nvme0n1 : 5.05 1571.50 6.14 0.00 0.00 81159.52 17396.83 81502.95 00:10:49.500 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:49.500 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:10:49.500 Nvme0n1 : 5.06 1544.29 6.03 0.00 0.00 82603.50 16443.58 109147.23 00:10:49.500 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:49.500 Verification LBA range: start 0x0 length 0xa0000 00:10:49.500 Nvme1n1 : 5.05 1571.01 6.14 0.00 0.00 81002.67 19184.17 77689.95 00:10:49.500 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:49.500 Verification LBA range: start 0xa0000 length 0xa0000 00:10:49.500 Nvme1n1 : 5.06 1543.69 6.03 0.00 0.00 82446.82 19899.11 109623.85 00:10:49.500 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:49.500 Verification LBA range: start 0x0 length 0x80000 00:10:49.500 Nvme2n1 : 5.05 1570.53 6.13 0.00 0.00 80840.33 21090.68 77213.32 00:10:49.500 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:49.500 Verification LBA range: start 0x80000 length 0x80000 00:10:49.500 Nvme2n1 : 5.06 1543.10 6.03 0.00 0.00 82323.94 20375.74 109623.85 00:10:49.500 Job: 
Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:49.500 Verification LBA range: start 0x0 length 0x80000 00:10:49.500 Nvme2n2 : 5.07 1578.30 6.17 0.00 0.00 80288.21 4706.68 76260.07 00:10:49.500 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:49.500 Verification LBA range: start 0x80000 length 0x80000 00:10:49.500 Nvme2n2 : 5.06 1542.55 6.03 0.00 0.00 82174.25 18826.71 103904.35 00:10:49.500 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:49.500 Verification LBA range: start 0x0 length 0x80000 00:10:49.500 Nvme2n3 : 5.08 1586.43 6.20 0.00 0.00 79797.60 11558.17 75783.45 00:10:49.500 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:49.500 Verification LBA range: start 0x80000 length 0x80000 00:10:49.500 Nvme2n3 : 5.08 1550.14 6.06 0.00 0.00 81615.16 4974.78 100091.35 00:10:49.500 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:49.500 Verification LBA range: start 0x0 length 0x20000 00:10:49.500 Nvme3n1 : 5.08 1585.98 6.20 0.00 0.00 79645.61 10962.39 77213.32 00:10:49.500 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:49.500 Verification LBA range: start 0x20000 length 0x20000 00:10:49.500 Nvme3n1 : 5.09 1557.93 6.09 0.00 0.00 81122.02 10247.45 106764.10 00:10:49.500 [2024-11-06T07:47:12.129Z] =================================================================================================================== 00:10:49.500 [2024-11-06T07:47:12.129Z] Total : 18745.45 73.22 0.00 0.00 81240.50 4706.68 109623.85 00:10:50.930 00:10:50.930 real 0m7.644s 00:10:50.930 user 0m14.054s 00:10:50.930 sys 0m0.308s 00:10:50.930 07:47:13 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:50.930 07:47:13 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:10:50.930 ************************************ 00:10:50.930 END TEST bdev_verify 00:10:50.930 ************************************ 00:10:50.930 07:47:13 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:50.930 07:47:13 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:10:50.930 07:47:13 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:50.930 07:47:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:50.930 ************************************ 00:10:50.930 START TEST bdev_verify_big_io 00:10:50.930 ************************************ 00:10:50.930 07:47:13 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:50.930 [2024-11-06 07:47:13.362201] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
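[Note] The bdev_verify pass above drove the same six namespaces through bdevperf rather than the kernel: --json replays test/bdev/bdev.json to recreate the bdevs, -q 128 keeps 128 I/Os in flight, -o 4096 sets 4 KiB I/Os, -w verify is a write-then-read-back-and-compare workload, -t 5 runs for five seconds, and -m 0x3 spreads jobs over two reactors (hence the two 'Reactor started' lines and one table row per namespace per core mask). The invocation, flags verbatim from the trace (-C reproduced as-is; the trailing '' is the empty extra-argument slot run_test forwards):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3

bdev_verify_big_io, starting here, repeats the same flow with -o 65536, so each verify I/O covers 64 KiB instead of 4 KiB.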
00:10:50.930 [2024-11-06 07:47:13.362422] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61952 ] 00:10:50.930 [2024-11-06 07:47:13.552093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:51.189 [2024-11-06 07:47:13.715940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:51.189 [2024-11-06 07:47:13.715946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.133 Running I/O for 5 seconds... 00:10:56.349 1744.00 IOPS, 109.00 MiB/s [2024-11-06T07:47:20.354Z] 2364.00 IOPS, 147.75 MiB/s [2024-11-06T07:47:20.613Z] 2280.67 IOPS, 142.54 MiB/s 00:10:57.984 Latency(us) 00:10:57.984 [2024-11-06T07:47:20.613Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:57.984 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:57.984 Verification LBA range: start 0x0 length 0xbd0b 00:10:57.984 Nvme0n1 : 5.70 123.43 7.71 0.00 0.00 994880.49 19779.96 915120.87 00:10:57.984 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:57.984 Verification LBA range: start 0xbd0b length 0xbd0b 00:10:57.984 Nvme0n1 : 5.75 122.42 7.65 0.00 0.00 1013164.05 37653.41 941811.90 00:10:57.984 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:57.984 Verification LBA range: start 0x0 length 0xa000 00:10:57.984 Nvme1n1 : 5.77 129.89 8.12 0.00 0.00 935858.30 16920.20 861738.82 00:10:57.984 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:57.984 Verification LBA range: start 0xa000 length 0xa000 00:10:57.984 Nvme1n1 : 5.78 129.31 8.08 0.00 0.00 948952.68 20733.21 964689.92 00:10:57.984 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:57.984 Verification LBA range: start 0x0 length 0x8000 00:10:57.984 Nvme2n1 : 5.77 129.68 8.10 0.00 0.00 911920.06 17635.14 892242.85 00:10:57.984 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:57.984 Verification LBA range: start 0x8000 length 0x8000 00:10:57.984 Nvme2n1 : 5.79 129.26 8.08 0.00 0.00 923220.64 21090.68 983754.94 00:10:57.984 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:57.984 Verification LBA range: start 0x0 length 0x8000 00:10:57.984 Nvme2n2 : 5.77 129.81 8.11 0.00 0.00 885674.69 17873.45 903681.86 00:10:57.984 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:57.984 Verification LBA range: start 0x8000 length 0x8000 00:10:57.984 Nvme2n2 : 5.79 128.88 8.05 0.00 0.00 900064.84 20971.52 991380.95 00:10:57.984 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:57.984 Verification LBA range: start 0x0 length 0x8000 00:10:57.984 Nvme2n3 : 5.77 133.03 8.31 0.00 0.00 843285.10 42657.98 922746.88 00:10:57.984 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:57.984 Verification LBA range: start 0x8000 length 0x8000 00:10:57.984 Nvme2n3 : 5.79 128.67 8.04 0.00 0.00 875865.89 20971.52 1014258.97 00:10:57.984 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:57.984 Verification LBA range: start 0x0 length 0x2000 00:10:57.984 Nvme3n1 : 5.81 150.38 9.40 0.00 0.00 727479.66 8340.95 945624.90 00:10:57.984 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO 
size: 65536) 00:10:57.984 Verification LBA range: start 0x2000 length 0x2000 00:10:57.984 Nvme3n1 : 5.80 136.27 8.52 0.00 0.00 805678.82 3008.70 1037136.99 00:10:57.984 [2024-11-06T07:47:20.613Z] =================================================================================================================== 00:10:57.984 [2024-11-06T07:47:20.613Z] Total : 1571.03 98.19 0.00 0.00 893244.61 3008.70 1037136.99 00:10:59.885 00:10:59.885 real 0m8.923s 00:10:59.885 user 0m16.455s 00:10:59.885 sys 0m0.377s 00:10:59.885 07:47:22 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:59.885 07:47:22 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:10:59.885 ************************************ 00:10:59.885 END TEST bdev_verify_big_io 00:10:59.885 ************************************ 00:10:59.885 07:47:22 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:59.885 07:47:22 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:10:59.885 07:47:22 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:59.885 07:47:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:59.885 ************************************ 00:10:59.885 START TEST bdev_write_zeroes 00:10:59.885 ************************************ 00:10:59.885 07:47:22 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:59.885 [2024-11-06 07:47:22.340327] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:10:59.885 [2024-11-06 07:47:22.340533] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62062 ] 00:11:00.144 [2024-11-06 07:47:22.532063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.144 [2024-11-06 07:47:22.670513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.080 Running I/O for 1 seconds... 
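[Note] bdev_write_zeroes, whose one-second per-namespace results follow, swaps the workload: -w write_zeroes issues zero-fill commands (-q 128 and -o 4096 as before) from a single core, exercising the bdev layer's write-zeroes path rather than read-back verification. Invocation as traced:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1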
00:11:02.014 41639.00 IOPS, 162.65 MiB/s 00:11:02.015 Latency(us) 00:11:02.015 [2024-11-06T07:47:24.644Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:02.015 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:02.015 Nvme0n1 : 1.21 5813.26 22.71 0.00 0.00 21011.56 7268.54 385113.37 00:11:02.015 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:02.015 Nvme1n1 : 1.08 6568.31 25.66 0.00 0.00 19395.31 11260.28 240219.23 00:11:02.015 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:02.015 Nvme2n1 : 1.08 6557.13 25.61 0.00 0.00 19348.77 10724.07 240219.23 00:11:02.015 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:02.015 Nvme2n2 : 1.09 6547.18 25.57 0.00 0.00 19297.83 7685.59 240219.23 00:11:02.015 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:02.015 Nvme2n3 : 1.09 6537.39 25.54 0.00 0.00 19299.74 6523.81 241172.48 00:11:02.015 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:02.015 Nvme3n1 : 1.08 6460.17 25.24 0.00 0.00 19539.07 12213.53 242125.73 00:11:02.015 [2024-11-06T07:47:24.644Z] =================================================================================================================== 00:11:02.015 [2024-11-06T07:47:24.644Z] Total : 38483.46 150.33 0.00 0.00 19646.18 6523.81 385113.37 00:11:03.456 00:11:03.456 real 0m3.481s 00:11:03.456 user 0m3.033s 00:11:03.456 sys 0m0.323s 00:11:03.456 07:47:25 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:03.456 07:47:25 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:11:03.456 ************************************ 00:11:03.456 END TEST bdev_write_zeroes 00:11:03.456 ************************************ 00:11:03.456 07:47:25 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:03.456 07:47:25 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:11:03.456 07:47:25 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:03.456 07:47:25 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:03.456 ************************************ 00:11:03.456 START TEST bdev_json_nonenclosed 00:11:03.456 ************************************ 00:11:03.456 07:47:25 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:03.456 [2024-11-06 07:47:25.879766] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
00:11:03.456 [2024-11-06 07:47:25.879964] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62121 ] 00:11:03.456 [2024-11-06 07:47:26.076005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.714 [2024-11-06 07:47:26.208931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.714 [2024-11-06 07:47:26.209053] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:11:03.714 [2024-11-06 07:47:26.209082] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:03.714 [2024-11-06 07:47:26.209097] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:03.973 00:11:03.973 real 0m0.751s 00:11:03.973 user 0m0.484s 00:11:03.973 sys 0m0.159s 00:11:03.973 07:47:26 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:03.973 07:47:26 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:11:03.973 ************************************ 00:11:03.973 END TEST bdev_json_nonenclosed 00:11:03.973 ************************************ 00:11:03.973 07:47:26 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:03.973 07:47:26 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:11:03.973 07:47:26 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:03.973 07:47:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:03.973 ************************************ 00:11:03.973 START TEST bdev_json_nonarray 00:11:03.973 ************************************ 00:11:03.973 07:47:26 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:04.231 [2024-11-06 07:47:26.702039] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:11:04.231 [2024-11-06 07:47:26.702244] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62152 ] 00:11:04.490 [2024-11-06 07:47:26.885906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.490 [2024-11-06 07:47:27.018092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.490 [2024-11-06 07:47:27.018219] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
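[Note] bdev_json_nonenclosed and bdev_json_nonarray are negative tests: each feeds bdevperf a deliberately malformed --json config and passes only if the app rejects it and exits non-zero, so the *ERROR* lines here and the "spdk_app_stop'd on non-zero" warnings that follow are the expected, successful outcome. Judging from the two error messages ("not enclosed in {}" and "'subsystems' should be an array"), the loader expects roughly this shape; a hedged reconstruction, with the bdev entries elided:

    {
      "subsystems": [
        { "subsystem": "bdev", "config": [ ] }
      ]
    }

nonenclosed.json drops the outer braces; nonarray.json makes "subsystems" something other than an array.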
00:11:04.490 [2024-11-06 07:47:27.018273] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:04.490 [2024-11-06 07:47:27.018293] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:04.748 00:11:04.748 real 0m0.721s 00:11:04.748 user 0m0.448s 00:11:04.748 sys 0m0.167s 00:11:04.748 07:47:27 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:04.748 07:47:27 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:11:04.748 ************************************ 00:11:04.748 END TEST bdev_json_nonarray 00:11:04.748 ************************************ 00:11:04.748 07:47:27 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:11:04.748 07:47:27 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:11:04.748 07:47:27 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:11:04.748 07:47:27 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:11:04.748 07:47:27 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:11:04.748 07:47:27 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:11:04.748 07:47:27 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:04.748 07:47:27 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:11:04.748 07:47:27 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:11:04.748 07:47:27 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:11:04.748 07:47:27 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:11:04.748 00:11:04.748 real 0m45.298s 00:11:04.748 user 1m8.419s 00:11:04.748 sys 0m7.404s 00:11:04.748 07:47:27 blockdev_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:04.748 07:47:27 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:11:04.748 ************************************ 00:11:04.748 END TEST blockdev_nvme 00:11:04.748 ************************************ 00:11:05.007 07:47:27 -- spdk/autotest.sh@209 -- # uname -s 00:11:05.007 07:47:27 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:11:05.007 07:47:27 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:11:05.007 07:47:27 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:05.007 07:47:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:05.007 07:47:27 -- common/autotest_common.sh@10 -- # set +x 00:11:05.007 ************************************ 00:11:05.007 START TEST blockdev_nvme_gpt 00:11:05.007 ************************************ 00:11:05.007 07:47:27 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:11:05.007 * Looking for test storage... 
00:11:05.007 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:11:05.007 07:47:27 blockdev_nvme_gpt -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:11:05.007 07:47:27 blockdev_nvme_gpt -- common/autotest_common.sh@1689 -- # lcov --version 00:11:05.007 07:47:27 blockdev_nvme_gpt -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:11:05.007 07:47:27 blockdev_nvme_gpt -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:11:05.007 07:47:27 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:05.007 07:47:27 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:05.007 07:47:27 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:05.007 07:47:27 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:11:05.007 07:47:27 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:11:05.007 07:47:27 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:11:05.007 07:47:27 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:11:05.007 07:47:27 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:11:05.007 07:47:27 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:11:05.007 07:47:27 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:11:05.007 07:47:27 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:05.007 07:47:27 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:11:05.007 07:47:27 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:11:05.007 07:47:27 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:05.007 07:47:27 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:05.007 07:47:27 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:11:05.007 07:47:27 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:11:05.007 07:47:27 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:05.007 07:47:27 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:11:05.007 07:47:27 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:11:05.007 07:47:27 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:11:05.007 07:47:27 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:11:05.007 07:47:27 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:05.007 07:47:27 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:11:05.007 07:47:27 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:11:05.007 07:47:27 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:05.007 07:47:27 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:05.007 07:47:27 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:11:05.007 07:47:27 blockdev_nvme_gpt -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:05.007 07:47:27 blockdev_nvme_gpt -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:11:05.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.007 --rc genhtml_branch_coverage=1 00:11:05.007 --rc genhtml_function_coverage=1 00:11:05.007 --rc genhtml_legend=1 00:11:05.007 --rc geninfo_all_blocks=1 00:11:05.007 --rc geninfo_unexecuted_blocks=1 00:11:05.007 00:11:05.007 ' 00:11:05.007 07:47:27 blockdev_nvme_gpt -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:11:05.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.007 --rc 
genhtml_branch_coverage=1 00:11:05.008 --rc genhtml_function_coverage=1 00:11:05.008 --rc genhtml_legend=1 00:11:05.008 --rc geninfo_all_blocks=1 00:11:05.008 --rc geninfo_unexecuted_blocks=1 00:11:05.008 00:11:05.008 ' 00:11:05.008 07:47:27 blockdev_nvme_gpt -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:11:05.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.008 --rc genhtml_branch_coverage=1 00:11:05.008 --rc genhtml_function_coverage=1 00:11:05.008 --rc genhtml_legend=1 00:11:05.008 --rc geninfo_all_blocks=1 00:11:05.008 --rc geninfo_unexecuted_blocks=1 00:11:05.008 00:11:05.008 ' 00:11:05.008 07:47:27 blockdev_nvme_gpt -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:11:05.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.008 --rc genhtml_branch_coverage=1 00:11:05.008 --rc genhtml_function_coverage=1 00:11:05.008 --rc genhtml_legend=1 00:11:05.008 --rc geninfo_all_blocks=1 00:11:05.008 --rc geninfo_unexecuted_blocks=1 00:11:05.008 00:11:05.008 ' 00:11:05.008 07:47:27 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:05.008 07:47:27 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:11:05.008 07:47:27 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:11:05.008 07:47:27 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:05.008 07:47:27 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:11:05.008 07:47:27 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:11:05.008 07:47:27 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:11:05.008 07:47:27 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:11:05.008 07:47:27 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:11:05.008 07:47:27 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:11:05.008 07:47:27 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:11:05.008 07:47:27 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:11:05.008 07:47:27 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:11:05.008 07:47:27 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:11:05.008 07:47:27 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:11:05.008 07:47:27 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:11:05.008 07:47:27 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:11:05.008 07:47:27 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:11:05.008 07:47:27 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:11:05.008 07:47:27 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:11:05.008 07:47:27 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:11:05.008 07:47:27 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:11:05.008 07:47:27 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:11:05.008 07:47:27 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:11:05.008 07:47:27 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62236 00:11:05.008 07:47:27 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:05.008 07:47:27 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 62236 
00:11:05.008 07:47:27 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:11:05.008 07:47:27 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # '[' -z 62236 ']' 00:11:05.008 07:47:27 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.008 07:47:27 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:05.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.008 07:47:27 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.008 07:47:27 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:05.008 07:47:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:05.266 [2024-11-06 07:47:27.695775] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:11:05.266 [2024-11-06 07:47:27.695936] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62236 ] 00:11:05.266 [2024-11-06 07:47:27.871225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.525 [2024-11-06 07:47:28.001497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.460 07:47:28 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:06.460 07:47:28 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # return 0 00:11:06.460 07:47:28 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:11:06.460 07:47:28 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:11:06.460 07:47:28 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:06.724 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:06.993 Waiting for block devices as requested 00:11:06.993 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:06.993 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:06.993 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:07.249 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:12.516 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:12.516 07:47:34 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:11:12.516 07:47:34 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # zoned_devs=() 00:11:12.516 07:47:34 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # local -gA zoned_devs 00:11:12.516 07:47:34 blockdev_nvme_gpt -- common/autotest_common.sh@1654 -- # local nvme bdf 00:11:12.516 07:47:34 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:11:12.516 07:47:34 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # is_block_zoned nvme0n1 00:11:12.516 07:47:34 blockdev_nvme_gpt -- common/autotest_common.sh@1646 -- # local device=nvme0n1 00:11:12.516 07:47:34 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:11:12.516 07:47:34 blockdev_nvme_gpt -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:11:12.516 07:47:34 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:11:12.516 07:47:34 
blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # is_block_zoned nvme1n1 00:11:12.516 07:47:34 blockdev_nvme_gpt -- common/autotest_common.sh@1646 -- # local device=nvme1n1 00:11:12.516 07:47:34 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:11:12.516 07:47:34 blockdev_nvme_gpt -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:11:12.516 07:47:34 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:11:12.516 07:47:34 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # is_block_zoned nvme2n1 00:11:12.516 07:47:34 blockdev_nvme_gpt -- common/autotest_common.sh@1646 -- # local device=nvme2n1 00:11:12.516 07:47:34 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:11:12.516 07:47:34 blockdev_nvme_gpt -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:11:12.516 07:47:34 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:11:12.516 07:47:34 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # is_block_zoned nvme2n2 00:11:12.516 07:47:34 blockdev_nvme_gpt -- common/autotest_common.sh@1646 -- # local device=nvme2n2 00:11:12.516 07:47:34 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:11:12.516 07:47:34 blockdev_nvme_gpt -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:11:12.516 07:47:34 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:11:12.516 07:47:34 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # is_block_zoned nvme2n3 00:11:12.516 07:47:34 blockdev_nvme_gpt -- common/autotest_common.sh@1646 -- # local device=nvme2n3 00:11:12.516 07:47:34 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:11:12.516 07:47:34 blockdev_nvme_gpt -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:11:12.516 07:47:34 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:11:12.516 07:47:34 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # is_block_zoned nvme3c3n1 00:11:12.516 07:47:34 blockdev_nvme_gpt -- common/autotest_common.sh@1646 -- # local device=nvme3c3n1 00:11:12.516 07:47:34 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:11:12.516 07:47:34 blockdev_nvme_gpt -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:11:12.516 07:47:34 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:11:12.516 07:47:34 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # is_block_zoned nvme3n1 00:11:12.516 07:47:34 blockdev_nvme_gpt -- common/autotest_common.sh@1646 -- # local device=nvme3n1 00:11:12.516 07:47:34 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:11:12.516 07:47:34 blockdev_nvme_gpt -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:11:12.516 07:47:34 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:11:12.516 07:47:34 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:11:12.516 07:47:34 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:11:12.516 07:47:34 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:11:12.516 07:47:34 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:11:12.516 07:47:34 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:11:12.516 07:47:34 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:11:12.516 07:47:34 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:11:12.516 BYT; 00:11:12.516 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:11:12.516 07:47:34 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:11:12.516 BYT; 00:11:12.516 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:11:12.516 07:47:34 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:11:12.516 07:47:34 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:11:12.516 07:47:34 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:11:12.516 07:47:34 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:11:12.516 07:47:34 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:11:12.516 07:47:34 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:11:12.516 07:47:34 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:11:12.516 07:47:34 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:11:12.516 07:47:34 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:11:12.516 07:47:34 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:11:12.516 07:47:34 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:11:12.516 07:47:34 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:11:12.516 07:47:34 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:11:12.516 07:47:34 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:11:12.516 07:47:34 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:11:12.516 07:47:34 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:11:12.516 07:47:34 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:11:12.516 07:47:34 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:11:12.516 07:47:34 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:11:12.516 07:47:34 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:11:12.516 07:47:34 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:11:12.516 07:47:34 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:11:12.516 07:47:34 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:11:12.516 07:47:34 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:11:12.516 07:47:34 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:11:12.516 07:47:34 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:11:12.516 07:47:34 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:11:12.516 07:47:34 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:11:12.516 07:47:34 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:11:13.451 The operation has completed successfully. 00:11:13.451 07:47:35 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:11:14.385 The operation has completed successfully. 00:11:14.385 07:47:36 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:14.999 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:15.566 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:15.566 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:15.566 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:15.566 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:15.566 07:47:38 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:11:15.566 07:47:38 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.566 07:47:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:15.824 [] 00:11:15.824 07:47:38 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.825 07:47:38 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:11:15.825 07:47:38 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:11:15.825 07:47:38 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:11:15.825 07:47:38 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:15.825 07:47:38 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:11:15.825 07:47:38 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.825 07:47:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:16.082 07:47:38 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.082 07:47:38 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:11:16.083 07:47:38 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.083 07:47:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:16.083 07:47:38 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.083 07:47:38 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:11:16.083 07:47:38 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:11:16.083 07:47:38 
blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.083 07:47:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:16.083 07:47:38 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.083 07:47:38 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:11:16.083 07:47:38 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.083 07:47:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:16.083 07:47:38 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.083 07:47:38 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:11:16.083 07:47:38 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.083 07:47:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:16.083 07:47:38 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.083 07:47:38 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:11:16.083 07:47:38 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:11:16.083 07:47:38 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.083 07:47:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:16.083 07:47:38 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:11:16.341 07:47:38 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.341 07:47:38 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:11:16.341 07:47:38 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:11:16.342 07:47:38 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "5cf84099-2d51-4028-9dd3-441d133efb3d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "5cf84099-2d51-4028-9dd3-441d133efb3d",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "9fc6f671-2e7d-492d-8a2d-e0971db25141"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9fc6f671-2e7d-492d-8a2d-e0971db25141",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "2db36a57-cc4f-4e24-91a1-b49c71d8e616"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2db36a57-cc4f-4e24-91a1-b49c71d8e616",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "67a8892c-4887-4717-bbd9-d01ad72b23b3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "67a8892c-4887-4717-bbd9-d01ad72b23b3",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "8febf29b-5ed1-4ad3-acdd-669965479128"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "8febf29b-5ed1-4ad3-acdd-669965479128",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:11:16.342 07:47:38 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:11:16.342 07:47:38 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:11:16.342 07:47:38 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:11:16.342 07:47:38 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 62236 00:11:16.342 07:47:38 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # '[' -z 62236 ']' 00:11:16.342 07:47:38 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # kill -0 62236 00:11:16.342 07:47:38 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # uname 00:11:16.342 07:47:38 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:16.342 07:47:38 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62236 00:11:16.342 07:47:38 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:16.342 07:47:38 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:16.342 killing process with pid 62236 00:11:16.342 07:47:38 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62236' 00:11:16.342 07:47:38 blockdev_nvme_gpt -- common/autotest_common.sh@969 -- # kill 62236 00:11:16.342 07:47:38 blockdev_nvme_gpt -- common/autotest_common.sh@974 -- # wait 62236 00:11:18.872 07:47:41 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:18.872 07:47:41 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:11:18.872 07:47:41 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:18.872 07:47:41 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:18.872 07:47:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:18.872 ************************************ 00:11:18.872 START TEST bdev_hello_world 00:11:18.872 ************************************ 00:11:18.872 07:47:41 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:11:18.872 
[2024-11-06 07:47:41.205943] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:11:18.872 [2024-11-06 07:47:41.206121] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62869 ] 00:11:18.872 [2024-11-06 07:47:41.401846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.130 [2024-11-06 07:47:41.565170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.696 [2024-11-06 07:47:42.243780] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:11:19.696 [2024-11-06 07:47:42.243872] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:11:19.696 [2024-11-06 07:47:42.243918] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:11:19.696 [2024-11-06 07:47:42.247345] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:11:19.696 [2024-11-06 07:47:42.247973] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:11:19.696 [2024-11-06 07:47:42.248020] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:11:19.696 [2024-11-06 07:47:42.248276] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:11:19.696 00:11:19.696 [2024-11-06 07:47:42.248324] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:11:21.073 00:11:21.073 real 0m2.220s 00:11:21.073 user 0m1.808s 00:11:21.073 sys 0m0.295s 00:11:21.073 07:47:43 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:21.073 07:47:43 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:11:21.073 ************************************ 00:11:21.073 END TEST bdev_hello_world 00:11:21.073 ************************************ 00:11:21.073 07:47:43 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:11:21.073 07:47:43 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:21.073 07:47:43 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:21.073 07:47:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:21.073 ************************************ 00:11:21.073 START TEST bdev_bounds 00:11:21.073 ************************************ 00:11:21.073 07:47:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:11:21.073 07:47:43 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=62915 00:11:21.073 07:47:43 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:21.073 07:47:43 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:11:21.073 Process bdevio pid: 62915 00:11:21.073 07:47:43 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 62915' 00:11:21.073 07:47:43 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 62915 00:11:21.073 07:47:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 62915 ']' 00:11:21.073 07:47:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.073 07:47:43 
blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:21.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.073 07:47:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.073 07:47:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:21.073 07:47:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:11:21.073 [2024-11-06 07:47:43.478966] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:11:21.073 [2024-11-06 07:47:43.479195] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62915 ] 00:11:21.073 [2024-11-06 07:47:43.672912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:21.331 [2024-11-06 07:47:43.837628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:21.331 [2024-11-06 07:47:43.837731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.331 [2024-11-06 07:47:43.837735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:22.266 07:47:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:22.266 07:47:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:11:22.266 07:47:44 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:11:22.266 I/O targets: 00:11:22.266 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:11:22.266 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:11:22.266 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:11:22.266 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:22.266 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:22.266 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:22.266 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:11:22.266 00:11:22.266 00:11:22.266 CUnit - A unit testing framework for C - Version 2.1-3 00:11:22.266 http://cunit.sourceforge.net/ 00:11:22.266 00:11:22.266 00:11:22.266 Suite: bdevio tests on: Nvme3n1 00:11:22.266 Test: blockdev write read block ...passed 00:11:22.266 Test: blockdev write zeroes read block ...passed 00:11:22.266 Test: blockdev write zeroes read no split ...passed 00:11:22.266 Test: blockdev write zeroes read split ...passed 00:11:22.266 Test: blockdev write zeroes read split partial ...passed 00:11:22.266 Test: blockdev reset ...[2024-11-06 07:47:44.722353] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:11:22.266 [2024-11-06 07:47:44.726229] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:11:22.266 passed 00:11:22.266 Test: blockdev write read 8 blocks ...passed 00:11:22.266 Test: blockdev write read size > 128k ...passed 00:11:22.266 Test: blockdev write read invalid size ...passed 00:11:22.266 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:22.266 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:22.266 Test: blockdev write read max offset ...passed 00:11:22.266 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:22.266 Test: blockdev writev readv 8 blocks ...passed 00:11:22.266 Test: blockdev writev readv 30 x 1block ...passed 00:11:22.266 Test: blockdev writev readv block ...passed 00:11:22.266 Test: blockdev writev readv size > 128k ...passed 00:11:22.266 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:22.266 Test: blockdev comparev and writev ...[2024-11-06 07:47:44.734042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bf804000 len:0x1000 00:11:22.266 [2024-11-06 07:47:44.734103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:22.266 passed 00:11:22.266 Test: blockdev nvme passthru rw ...passed 00:11:22.266 Test: blockdev nvme passthru vendor specific ...[2024-11-06 07:47:44.734974] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:22.266 [2024-11-06 07:47:44.735019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:22.266 passed 00:11:22.266 Test: blockdev nvme admin passthru ...passed 00:11:22.266 Test: blockdev copy ...passed 00:11:22.266 Suite: bdevio tests on: Nvme2n3 00:11:22.266 Test: blockdev write read block ...passed 00:11:22.266 Test: blockdev write zeroes read block ...passed 00:11:22.266 Test: blockdev write zeroes read no split ...passed 00:11:22.266 Test: blockdev write zeroes read split ...passed 00:11:22.266 Test: blockdev write zeroes read split partial ...passed 00:11:22.266 Test: blockdev reset ...[2024-11-06 07:47:44.808555] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:11:22.266 [2024-11-06 07:47:44.812917] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:11:22.266 passed 00:11:22.266 Test: blockdev write read 8 blocks ...passed 00:11:22.266 Test: blockdev write read size > 128k ...passed 00:11:22.266 Test: blockdev write read invalid size ...passed 00:11:22.266 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:22.266 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:22.266 Test: blockdev write read max offset ...passed 00:11:22.266 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:22.266 Test: blockdev writev readv 8 blocks ...passed 00:11:22.266 Test: blockdev writev readv 30 x 1block ...passed 00:11:22.266 Test: blockdev writev readv block ...passed 00:11:22.266 Test: blockdev writev readv size > 128k ...passed 00:11:22.266 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:22.266 Test: blockdev comparev and writev ...[2024-11-06 07:47:44.820278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bf802000 len:0x1000 00:11:22.266 [2024-11-06 07:47:44.820338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:22.266 passed 00:11:22.266 Test: blockdev nvme passthru rw ...passed 00:11:22.266 Test: blockdev nvme passthru vendor specific ...[2024-11-06 07:47:44.821233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:22.266 [2024-11-06 07:47:44.821288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:22.266 passed 00:11:22.266 Test: blockdev nvme admin passthru ...passed 00:11:22.266 Test: blockdev copy ...passed 00:11:22.266 Suite: bdevio tests on: Nvme2n2 00:11:22.266 Test: blockdev write read block ...passed 00:11:22.266 Test: blockdev write zeroes read block ...passed 00:11:22.266 Test: blockdev write zeroes read no split ...passed 00:11:22.266 Test: blockdev write zeroes read split ...passed 00:11:22.525 Test: blockdev write zeroes read split partial ...passed 00:11:22.526 Test: blockdev reset ...[2024-11-06 07:47:44.905433] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:11:22.526 [2024-11-06 07:47:44.909758] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:11:22.526 passed 00:11:22.526 Test: blockdev write read 8 blocks ...passed 00:11:22.526 Test: blockdev write read size > 128k ...passed 00:11:22.526 Test: blockdev write read invalid size ...passed 00:11:22.526 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:22.526 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:22.526 Test: blockdev write read max offset ...passed 00:11:22.526 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:22.526 Test: blockdev writev readv 8 blocks ...passed 00:11:22.526 Test: blockdev writev readv 30 x 1block ...passed 00:11:22.526 Test: blockdev writev readv block ...passed 00:11:22.526 Test: blockdev writev readv size > 128k ...passed 00:11:22.526 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:22.526 Test: blockdev comparev and writev ...[2024-11-06 07:47:44.919651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d3e38000 len:0x1000 00:11:22.526 [2024-11-06 07:47:44.919741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:22.526 passed 00:11:22.526 Test: blockdev nvme passthru rw ...passed 00:11:22.526 Test: blockdev nvme passthru vendor specific ...[2024-11-06 07:47:44.920636] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:22.526 [2024-11-06 07:47:44.920680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:22.526 passed 00:11:22.526 Test: blockdev nvme admin passthru ...passed 00:11:22.526 Test: blockdev copy ...passed 00:11:22.526 Suite: bdevio tests on: Nvme2n1 00:11:22.526 Test: blockdev write read block ...passed 00:11:22.526 Test: blockdev write zeroes read block ...passed 00:11:22.526 Test: blockdev write zeroes read no split ...passed 00:11:22.526 Test: blockdev write zeroes read split ...passed 00:11:22.526 Test: blockdev write zeroes read split partial ...passed 00:11:22.526 Test: blockdev reset ...[2024-11-06 07:47:45.003028] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:11:22.526 [2024-11-06 07:47:45.007378] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:11:22.526 passed 00:11:22.526 Test: blockdev write read 8 blocks ...passed 00:11:22.526 Test: blockdev write read size > 128k ...passed 00:11:22.526 Test: blockdev write read invalid size ...passed 00:11:22.526 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:22.526 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:22.526 Test: blockdev write read max offset ...passed 00:11:22.526 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:22.526 Test: blockdev writev readv 8 blocks ...passed 00:11:22.526 Test: blockdev writev readv 30 x 1block ...passed 00:11:22.526 Test: blockdev writev readv block ...passed 00:11:22.526 Test: blockdev writev readv size > 128k ...passed 00:11:22.526 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:22.526 Test: blockdev comparev and writev ...[2024-11-06 07:47:45.016234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d3e34000 len:0x1000 00:11:22.526 [2024-11-06 07:47:45.016308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:22.526 passed 00:11:22.526 Test: blockdev nvme passthru rw ...passed 00:11:22.526 Test: blockdev nvme passthru vendor specific ...[2024-11-06 07:47:45.017197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:22.526 [2024-11-06 07:47:45.017240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:22.526 passed 00:11:22.526 Test: blockdev nvme admin passthru ...passed 00:11:22.526 Test: blockdev copy ...passed 00:11:22.526 Suite: bdevio tests on: Nvme1n1p2 00:11:22.526 Test: blockdev write read block ...passed 00:11:22.526 Test: blockdev write zeroes read block ...passed 00:11:22.526 Test: blockdev write zeroes read no split ...passed 00:11:22.526 Test: blockdev write zeroes read split ...passed 00:11:22.526 Test: blockdev write zeroes read split partial ...passed 00:11:22.526 Test: blockdev reset ...[2024-11-06 07:47:45.093288] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:11:22.526 [2024-11-06 07:47:45.097491] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:11:22.526 passed 00:11:22.526 Test: blockdev write read 8 blocks ...passed 00:11:22.526 Test: blockdev write read size > 128k ...passed 00:11:22.526 Test: blockdev write read invalid size ...passed 00:11:22.526 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:22.526 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:22.526 Test: blockdev write read max offset ...passed 00:11:22.526 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:22.526 Test: blockdev writev readv 8 blocks ...passed 00:11:22.526 Test: blockdev writev readv 30 x 1block ...passed 00:11:22.526 Test: blockdev writev readv block ...passed 00:11:22.526 Test: blockdev writev readv size > 128k ...passed 00:11:22.526 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:22.526 Test: blockdev comparev and writev ...[2024-11-06 07:47:45.106735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2d3e30000 len:0x1000 00:11:22.526 [2024-11-06 07:47:45.106798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:22.526 passed 00:11:22.526 Test: blockdev nvme passthru rw ...passed 00:11:22.526 Test: blockdev nvme passthru vendor specific ...passed 00:11:22.526 Test: blockdev nvme admin passthru ...passed 00:11:22.526 Test: blockdev copy ...passed 00:11:22.526 Suite: bdevio tests on: Nvme1n1p1 00:11:22.526 Test: blockdev write read block ...passed 00:11:22.526 Test: blockdev write zeroes read block ...passed 00:11:22.526 Test: blockdev write zeroes read no split ...passed 00:11:22.526 Test: blockdev write zeroes read split ...passed 00:11:22.785 Test: blockdev write zeroes read split partial ...passed 00:11:22.785 Test: blockdev reset ...[2024-11-06 07:47:45.175964] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:11:22.785 [2024-11-06 07:47:45.179812] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:11:22.785 passed 00:11:22.785 Test: blockdev write read 8 blocks ...passed 00:11:22.785 Test: blockdev write read size > 128k ...passed 00:11:22.785 Test: blockdev write read invalid size ...passed 00:11:22.785 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:22.785 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:22.785 Test: blockdev write read max offset ...passed 00:11:22.785 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:22.785 Test: blockdev writev readv 8 blocks ...passed 00:11:22.785 Test: blockdev writev readv 30 x 1block ...passed 00:11:22.785 Test: blockdev writev readv block ...passed 00:11:22.785 Test: blockdev writev readv size > 128k ...passed 00:11:22.785 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:22.785 Test: blockdev comparev and writev ...[2024-11-06 07:47:45.188553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2c020e000 len:0x1000 00:11:22.785 [2024-11-06 07:47:45.188615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:22.785 passed 00:11:22.785 Test: blockdev nvme passthru rw ...passed 00:11:22.785 Test: blockdev nvme passthru vendor specific ...passed 00:11:22.785 Test: blockdev nvme admin passthru ...passed 00:11:22.785 Test: blockdev copy ...passed 00:11:22.785 Suite: bdevio tests on: Nvme0n1 00:11:22.785 Test: blockdev write read block ...passed 00:11:22.785 Test: blockdev write zeroes read block ...passed 00:11:22.785 Test: blockdev write zeroes read no split ...passed 00:11:22.785 Test: blockdev write zeroes read split ...passed 00:11:22.785 Test: blockdev write zeroes read split partial ...passed 00:11:22.785 Test: blockdev reset ...[2024-11-06 07:47:45.258585] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:11:22.785 [2024-11-06 07:47:45.262366] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:11:22.785 passed 00:11:22.785 Test: blockdev write read 8 blocks ...passed 00:11:22.785 Test: blockdev write read size > 128k ...passed 00:11:22.785 Test: blockdev write read invalid size ...passed 00:11:22.785 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:22.785 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:22.785 Test: blockdev write read max offset ...passed 00:11:22.785 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:22.785 Test: blockdev writev readv 8 blocks ...passed 00:11:22.785 Test: blockdev writev readv 30 x 1block ...passed 00:11:22.785 Test: blockdev writev readv block ...passed 00:11:22.785 Test: blockdev writev readv size > 128k ...passed 00:11:22.785 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:22.785 Test: blockdev comparev and writev ...[2024-11-06 07:47:45.269892] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:11:22.785 separate metadata which is not supported yet. 
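bdevio skips comparev_and_writev on Nvme0n1 because, as the *ERROR* line above says, that namespace carries separate (non-interleaved) metadata, which the test does not support yet. Whether a given bdev has that layout can be read back over RPC; a sketch, assuming the default application socket, that jq is installed, and that md_size/md_interleave are the field names bdev_get_bdevs reports:

    scripts/rpc.py bdev_get_bdevs -b Nvme0n1 \
        | jq -r '.[0] | "md_size=\(.md_size) interleaved=\(.md_interleave)"'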
00:11:22.785 passed 00:11:22.785 Test: blockdev nvme passthru rw ...passed 00:11:22.785 Test: blockdev nvme passthru vendor specific ...[2024-11-06 07:47:45.270475] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:11:22.785 [2024-11-06 07:47:45.270531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:11:22.785 passed 00:11:22.785 Test: blockdev nvme admin passthru ...passed 00:11:22.785 Test: blockdev copy ...passed 00:11:22.785 00:11:22.785 Run Summary: Type Total Ran Passed Failed Inactive 00:11:22.785 suites 7 7 n/a 0 0 00:11:22.785 tests 161 161 161 0 0 00:11:22.785 asserts 1025 1025 1025 0 n/a 00:11:22.785 00:11:22.785 Elapsed time = 1.675 seconds 00:11:22.785 0 00:11:22.785 07:47:45 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 62915 00:11:22.785 07:47:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 62915 ']' 00:11:22.785 07:47:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 62915 00:11:22.785 07:47:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:11:22.785 07:47:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:22.785 07:47:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62915 00:11:22.785 07:47:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:22.785 07:47:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:22.785 killing process with pid 62915 00:11:22.785 07:47:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62915' 00:11:22.785 07:47:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@969 -- # kill 62915 00:11:22.785 07:47:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@974 -- # wait 62915 00:11:23.719 07:47:46 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:11:23.719 00:11:23.719 real 0m2.945s 00:11:23.719 user 0m7.555s 00:11:23.719 sys 0m0.456s 00:11:23.719 07:47:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:23.719 ************************************ 00:11:23.719 END TEST bdev_bounds 00:11:23.719 07:47:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:11:23.719 ************************************ 00:11:23.978 07:47:46 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:11:23.978 07:47:46 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:23.978 07:47:46 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:23.978 07:47:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:23.978 ************************************ 00:11:23.978 START TEST bdev_nbd 00:11:23.978 ************************************ 00:11:23.978 07:47:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:11:23.978 07:47:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:11:23.978 07:47:46 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:11:23.978 07:47:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:23.978 07:47:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:23.978 07:47:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:23.978 07:47:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:11:23.978 07:47:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:11:23.978 07:47:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:11:23.978 07:47:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:23.978 07:47:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:11:23.978 07:47:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:11:23.978 07:47:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:23.978 07:47:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:11:23.978 07:47:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:23.978 07:47:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:11:23.978 07:47:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=62976 00:11:23.978 07:47:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:11:23.978 07:47:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 62976 /var/tmp/spdk-nbd.sock 00:11:23.978 07:47:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:23.978 07:47:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 62976 ']' 00:11:23.978 07:47:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:23.978 07:47:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:23.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:23.978 07:47:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:23.978 07:47:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:23.978 07:47:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:11:23.978 [2024-11-06 07:47:46.481394] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
00:11:23.978 [2024-11-06 07:47:46.481598] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:24.237 [2024-11-06 07:47:46.669132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.237 [2024-11-06 07:47:46.803216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.220 07:47:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:25.220 07:47:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:11:25.220 07:47:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:11:25.220 07:47:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:25.220 07:47:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:25.220 07:47:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:11:25.220 07:47:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:11:25.220 07:47:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:25.220 07:47:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:25.220 07:47:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:11:25.220 07:47:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:11:25.220 07:47:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:11:25.220 07:47:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:11:25.220 07:47:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:25.220 07:47:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:11:25.220 07:47:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:11:25.220 07:47:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:11:25.220 07:47:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:11:25.220 07:47:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:25.220 07:47:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:25.220 07:47:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:25.220 07:47:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:25.220 07:47:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:25.220 07:47:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:25.220 07:47:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:25.220 07:47:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:25.220 07:47:47 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:25.220 1+0 records in 00:11:25.220 1+0 records out 00:11:25.220 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000482962 s, 8.5 MB/s 00:11:25.220 07:47:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:25.220 07:47:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:25.220 07:47:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:25.220 07:47:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:25.220 07:47:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:25.220 07:47:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:25.220 07:47:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:25.479 07:47:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:11:25.738 07:47:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:11:25.738 07:47:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:11:25.738 07:47:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:11:25.738 07:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:25.738 07:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:25.738 07:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:25.738 07:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:25.738 07:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:25.738 07:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:25.738 07:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:25.738 07:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:25.738 07:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:25.738 1+0 records in 00:11:25.738 1+0 records out 00:11:25.738 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000507558 s, 8.1 MB/s 00:11:25.738 07:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:25.738 07:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:25.738 07:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:25.738 07:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:25.738 07:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:25.738 07:47:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:25.738 07:47:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:25.738 07:47:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:11:25.996 07:47:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:11:25.996 07:47:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:11:25.996 07:47:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:11:25.996 07:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:11:25.996 07:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:25.996 07:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:25.996 07:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:25.996 07:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:11:25.996 07:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:25.996 07:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:25.996 07:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:25.996 07:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:25.996 1+0 records in 00:11:25.996 1+0 records out 00:11:25.996 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000673778 s, 6.1 MB/s 00:11:25.996 07:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:25.996 07:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:25.996 07:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:25.996 07:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:25.996 07:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:25.996 07:47:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:25.996 07:47:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:25.996 07:47:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:11:26.255 07:47:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:11:26.255 07:47:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:11:26.255 07:47:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:11:26.255 07:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:11:26.255 07:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:26.255 07:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:26.255 07:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:26.255 07:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:11:26.255 07:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:26.255 07:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:26.255 07:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:26.255 07:47:48 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:26.255 1+0 records in 00:11:26.255 1+0 records out 00:11:26.255 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00156245 s, 2.6 MB/s 00:11:26.255 07:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.255 07:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:26.255 07:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.255 07:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:26.255 07:47:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:26.255 07:47:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:26.255 07:47:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:26.255 07:47:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:11:26.514 07:47:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:11:26.514 07:47:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:11:26.514 07:47:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:11:26.514 07:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:11:26.514 07:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:26.514 07:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:26.514 07:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:26.514 07:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:11:26.514 07:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:26.514 07:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:26.514 07:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:26.514 07:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:26.514 1+0 records in 00:11:26.514 1+0 records out 00:11:26.514 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000621483 s, 6.6 MB/s 00:11:26.514 07:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.514 07:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:26.514 07:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.514 07:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:26.514 07:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:26.514 07:47:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:26.514 07:47:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:26.514 07:47:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
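Every nbd_start_disk above is followed by the same readiness check: waitfornbd polls /proc/partitions until the kernel node shows up, then reads a single 4 KiB block through it with O_DIRECT to prove the device actually serves I/O. Reconstructed from the trace (a sketch, not the verbatim helper; the retry delay and scratch path are assumptions):

    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # bypass the page cache so the read really hits the nbd device
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }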
00:11:26.773 07:47:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:11:26.773 07:47:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:11:26.773 07:47:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:11:26.773 07:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:11:26.773 07:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:26.773 07:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:26.773 07:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:26.773 07:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:11:26.773 07:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:26.773 07:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:26.773 07:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:26.773 07:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:26.773 1+0 records in 00:11:26.773 1+0 records out 00:11:26.773 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000662704 s, 6.2 MB/s 00:11:26.773 07:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.773 07:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:26.773 07:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.773 07:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:26.773 07:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:26.773 07:47:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:26.773 07:47:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:26.773 07:47:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:11:27.340 07:47:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:11:27.340 07:47:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:11:27.340 07:47:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:11:27.340 07:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:11:27.340 07:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:27.340 07:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:27.340 07:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:27.340 07:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:11:27.340 07:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:27.340 07:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:27.340 07:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:27.340 07:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # 
dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:27.340 1+0 records in 00:11:27.340 1+0 records out 00:11:27.340 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106037 s, 3.9 MB/s 00:11:27.340 07:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.340 07:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:27.340 07:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.340 07:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:27.340 07:47:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:27.340 07:47:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:27.340 07:47:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:27.340 07:47:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:27.598 07:47:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:11:27.598 { 00:11:27.598 "nbd_device": "/dev/nbd0", 00:11:27.598 "bdev_name": "Nvme0n1" 00:11:27.598 }, 00:11:27.598 { 00:11:27.598 "nbd_device": "/dev/nbd1", 00:11:27.598 "bdev_name": "Nvme1n1p1" 00:11:27.598 }, 00:11:27.598 { 00:11:27.598 "nbd_device": "/dev/nbd2", 00:11:27.598 "bdev_name": "Nvme1n1p2" 00:11:27.598 }, 00:11:27.598 { 00:11:27.598 "nbd_device": "/dev/nbd3", 00:11:27.598 "bdev_name": "Nvme2n1" 00:11:27.598 }, 00:11:27.598 { 00:11:27.598 "nbd_device": "/dev/nbd4", 00:11:27.598 "bdev_name": "Nvme2n2" 00:11:27.598 }, 00:11:27.598 { 00:11:27.598 "nbd_device": "/dev/nbd5", 00:11:27.598 "bdev_name": "Nvme2n3" 00:11:27.598 }, 00:11:27.598 { 00:11:27.598 "nbd_device": "/dev/nbd6", 00:11:27.598 "bdev_name": "Nvme3n1" 00:11:27.598 } 00:11:27.598 ]' 00:11:27.598 07:47:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:11:27.598 07:47:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:11:27.598 07:47:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:11:27.598 { 00:11:27.598 "nbd_device": "/dev/nbd0", 00:11:27.598 "bdev_name": "Nvme0n1" 00:11:27.598 }, 00:11:27.598 { 00:11:27.598 "nbd_device": "/dev/nbd1", 00:11:27.598 "bdev_name": "Nvme1n1p1" 00:11:27.598 }, 00:11:27.598 { 00:11:27.598 "nbd_device": "/dev/nbd2", 00:11:27.598 "bdev_name": "Nvme1n1p2" 00:11:27.598 }, 00:11:27.598 { 00:11:27.598 "nbd_device": "/dev/nbd3", 00:11:27.598 "bdev_name": "Nvme2n1" 00:11:27.598 }, 00:11:27.598 { 00:11:27.598 "nbd_device": "/dev/nbd4", 00:11:27.598 "bdev_name": "Nvme2n2" 00:11:27.598 }, 00:11:27.598 { 00:11:27.598 "nbd_device": "/dev/nbd5", 00:11:27.598 "bdev_name": "Nvme2n3" 00:11:27.598 }, 00:11:27.598 { 00:11:27.598 "nbd_device": "/dev/nbd6", 00:11:27.598 "bdev_name": "Nvme3n1" 00:11:27.598 } 00:11:27.598 ]' 00:11:27.598 07:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:11:27.598 07:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:27.598 07:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:11:27.598 07:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:27.598 07:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:27.598 07:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:27.598 07:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:27.861 07:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:27.861 07:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:27.861 07:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:27.861 07:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:27.861 07:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:27.861 07:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:27.861 07:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:27.861 07:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:27.861 07:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:27.861 07:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:28.135 07:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:28.135 07:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:28.135 07:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:28.135 07:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:28.135 07:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:28.135 07:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:28.135 07:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:28.135 07:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:28.135 07:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:28.135 07:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:11:28.393 07:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:11:28.393 07:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:11:28.393 07:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:11:28.393 07:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:28.393 07:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:28.393 07:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:11:28.393 07:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:28.393 07:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:28.393 07:47:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:28.393 07:47:50 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:11:28.652 07:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:11:28.652 07:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:11:28.652 07:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:11:28.652 07:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:28.652 07:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:28.652 07:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:11:28.652 07:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:28.652 07:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:28.652 07:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:28.652 07:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:11:28.910 07:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:11:28.910 07:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:11:28.910 07:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:11:28.910 07:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:28.910 07:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:28.910 07:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:11:28.910 07:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:28.910 07:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:28.910 07:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:28.910 07:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:11:29.168 07:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:11:29.168 07:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:11:29.168 07:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:11:29.169 07:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:29.169 07:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:29.169 07:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:11:29.169 07:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:29.169 07:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:29.169 07:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:29.169 07:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:11:29.427 07:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:11:29.427 07:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:11:29.427 07:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
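Teardown is the mirror image: each device gets an nbd_stop_disk over the same socket, and the waitfornbd_exit helper being traced here polls /proc/partitions until the node disappears, with the same 20-try bound. Roughly (a sketch; the retry delay is an assumption):

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break   # node is gone
            sleep 0.1
        done
    }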
00:11:29.427 07:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:29.427 07:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:29.427 07:47:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:11:29.427 07:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:29.427 07:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:29.427 07:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:29.427 07:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:29.427 07:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:29.685 07:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:29.685 07:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:29.685 07:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:29.943 07:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:29.943 07:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:11:29.943 07:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:29.943 07:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:11:29.943 07:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:11:29.943 07:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:11:29.943 07:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:11:29.943 07:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:11:29.943 07:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:11:29.944 07:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:11:29.944 07:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:29.944 07:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:29.944 07:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:29.944 07:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:29.944 07:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:29.944 07:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:11:29.944 07:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:29.944 07:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:29.944 07:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:29.944 07:47:52 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:29.944 07:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:29.944 07:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:11:29.944 07:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:29.944 07:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:29.944 07:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:11:30.202 /dev/nbd0 00:11:30.202 07:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:30.202 07:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:30.202 07:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:30.202 07:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:30.202 07:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:30.202 07:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:30.202 07:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:30.202 07:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:30.202 07:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:30.202 07:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:30.202 07:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:30.202 1+0 records in 00:11:30.202 1+0 records out 00:11:30.202 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000588464 s, 7.0 MB/s 00:11:30.202 07:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:30.202 07:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:30.202 07:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:30.202 07:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:30.202 07:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:30.202 07:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:30.202 07:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:30.202 07:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:11:30.461 /dev/nbd1 00:11:30.461 07:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:30.461 07:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:30.461 07:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:30.461 07:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:30.461 07:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:30.461 07:47:52 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:30.461 07:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:30.461 07:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:30.462 07:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:30.462 07:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:30.462 07:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:30.462 1+0 records in 00:11:30.462 1+0 records out 00:11:30.462 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000567117 s, 7.2 MB/s 00:11:30.462 07:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:30.462 07:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:30.462 07:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:30.462 07:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:30.462 07:47:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:30.462 07:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:30.462 07:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:30.462 07:47:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:11:30.719 /dev/nbd10 00:11:30.719 07:47:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:11:30.719 07:47:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:11:30.719 07:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:11:30.719 07:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:30.719 07:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:30.720 07:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:30.720 07:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:11:30.720 07:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:30.720 07:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:30.720 07:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:30.720 07:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:30.720 1+0 records in 00:11:30.720 1+0 records out 00:11:30.720 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000449319 s, 9.1 MB/s 00:11:30.720 07:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:30.720 07:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:30.720 07:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:30.720 07:47:53 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:30.720 07:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:30.720 07:47:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:30.720 07:47:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:30.720 07:47:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:11:31.287 /dev/nbd11 00:11:31.287 07:47:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:11:31.287 07:47:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:11:31.287 07:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:11:31.287 07:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:31.287 07:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:31.287 07:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:31.287 07:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:11:31.287 07:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:31.287 07:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:31.287 07:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:31.287 07:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:31.287 1+0 records in 00:11:31.287 1+0 records out 00:11:31.287 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000569336 s, 7.2 MB/s 00:11:31.287 07:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:31.287 07:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:31.287 07:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:31.287 07:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:31.287 07:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:31.287 07:47:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:31.287 07:47:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:31.287 07:47:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:11:31.287 /dev/nbd12 00:11:31.546 07:47:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:11:31.546 07:47:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:11:31.546 07:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:11:31.546 07:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:31.546 07:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:31.546 07:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:31.546 07:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 
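This second pass (nbd_rpc_data_verify) restarts the devices with explicit node names, so each bdev lands on a known /dev/nbdX and the nbd_get_disks JSON printed later can be checked against the expected pairing. The loop being traced reduces to (a sketch; array contents copied from the trace):

    bdev_list=(Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1)
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14)
    for ((i = 0; i < ${#bdev_list[@]}; i++)); do
        scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk \
            "${bdev_list[i]}" "${nbd_list[i]}"
        waitfornbd "$(basename "${nbd_list[i]}")"
    done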
00:11:31.546 07:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:31.546 07:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:31.546 07:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:31.546 07:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:31.546 1+0 records in 00:11:31.546 1+0 records out 00:11:31.546 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00294571 s, 1.4 MB/s 00:11:31.546 07:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:31.546 07:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:31.546 07:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:31.546 07:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:31.546 07:47:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:31.546 07:47:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:31.546 07:47:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:31.546 07:47:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:11:31.805 /dev/nbd13 00:11:31.806 07:47:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:11:31.806 07:47:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:11:31.806 07:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:11:31.806 07:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:31.806 07:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:31.806 07:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:31.806 07:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:11:31.806 07:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:31.806 07:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:31.806 07:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:31.806 07:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:31.806 1+0 records in 00:11:31.806 1+0 records out 00:11:31.806 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000690113 s, 5.9 MB/s 00:11:31.806 07:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:31.806 07:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:31.806 07:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:31.806 07:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:31.806 07:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:31.806 07:47:54 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:31.806 07:47:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:31.806 07:47:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:11:32.065 /dev/nbd14 00:11:32.065 07:47:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:11:32.065 07:47:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:11:32.065 07:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:11:32.065 07:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:32.065 07:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:32.065 07:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:32.065 07:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:11:32.065 07:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:32.065 07:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:32.065 07:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:32.065 07:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:32.065 1+0 records in 00:11:32.065 1+0 records out 00:11:32.065 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00072758 s, 5.6 MB/s 00:11:32.065 07:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:32.065 07:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:32.065 07:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:32.065 07:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:32.065 07:47:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:32.065 07:47:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:32.065 07:47:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:32.065 07:47:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:32.065 07:47:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:32.065 07:47:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:32.632 07:47:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:32.632 { 00:11:32.632 "nbd_device": "/dev/nbd0", 00:11:32.632 "bdev_name": "Nvme0n1" 00:11:32.632 }, 00:11:32.632 { 00:11:32.632 "nbd_device": "/dev/nbd1", 00:11:32.632 "bdev_name": "Nvme1n1p1" 00:11:32.632 }, 00:11:32.632 { 00:11:32.632 "nbd_device": "/dev/nbd10", 00:11:32.632 "bdev_name": "Nvme1n1p2" 00:11:32.632 }, 00:11:32.632 { 00:11:32.632 "nbd_device": "/dev/nbd11", 00:11:32.632 "bdev_name": "Nvme2n1" 00:11:32.632 }, 00:11:32.632 { 00:11:32.632 "nbd_device": "/dev/nbd12", 00:11:32.632 "bdev_name": "Nvme2n2" 00:11:32.632 }, 00:11:32.632 { 00:11:32.632 "nbd_device": "/dev/nbd13", 00:11:32.632 "bdev_name": "Nvme2n3" 
00:11:32.632 }, 00:11:32.632 { 00:11:32.632 "nbd_device": "/dev/nbd14", 00:11:32.632 "bdev_name": "Nvme3n1" 00:11:32.632 } 00:11:32.632 ]' 00:11:32.632 07:47:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:32.632 { 00:11:32.632 "nbd_device": "/dev/nbd0", 00:11:32.632 "bdev_name": "Nvme0n1" 00:11:32.632 }, 00:11:32.632 { 00:11:32.632 "nbd_device": "/dev/nbd1", 00:11:32.632 "bdev_name": "Nvme1n1p1" 00:11:32.632 }, 00:11:32.632 { 00:11:32.632 "nbd_device": "/dev/nbd10", 00:11:32.632 "bdev_name": "Nvme1n1p2" 00:11:32.632 }, 00:11:32.632 { 00:11:32.632 "nbd_device": "/dev/nbd11", 00:11:32.632 "bdev_name": "Nvme2n1" 00:11:32.632 }, 00:11:32.632 { 00:11:32.632 "nbd_device": "/dev/nbd12", 00:11:32.632 "bdev_name": "Nvme2n2" 00:11:32.632 }, 00:11:32.632 { 00:11:32.632 "nbd_device": "/dev/nbd13", 00:11:32.632 "bdev_name": "Nvme2n3" 00:11:32.632 }, 00:11:32.632 { 00:11:32.632 "nbd_device": "/dev/nbd14", 00:11:32.632 "bdev_name": "Nvme3n1" 00:11:32.632 } 00:11:32.632 ]' 00:11:32.632 07:47:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:32.632 07:47:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:32.632 /dev/nbd1 00:11:32.632 /dev/nbd10 00:11:32.632 /dev/nbd11 00:11:32.632 /dev/nbd12 00:11:32.632 /dev/nbd13 00:11:32.632 /dev/nbd14' 00:11:32.632 07:47:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:32.632 /dev/nbd1 00:11:32.632 /dev/nbd10 00:11:32.632 /dev/nbd11 00:11:32.632 /dev/nbd12 00:11:32.633 /dev/nbd13 00:11:32.633 /dev/nbd14' 00:11:32.633 07:47:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:32.633 07:47:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:11:32.633 07:47:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:11:32.633 07:47:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:11:32.633 07:47:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:11:32.633 07:47:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:11:32.633 07:47:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:32.633 07:47:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:32.633 07:47:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:32.633 07:47:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:32.633 07:47:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:32.633 07:47:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:11:32.633 256+0 records in 00:11:32.633 256+0 records out 00:11:32.633 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00571259 s, 184 MB/s 00:11:32.633 07:47:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:32.633 07:47:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:32.633 256+0 records in 00:11:32.633 256+0 records out 00:11:32.633 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.170933 s, 6.1 MB/s 00:11:32.633 07:47:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:32.633 07:47:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:32.892 256+0 records in 00:11:32.892 256+0 records out 00:11:32.892 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.191224 s, 5.5 MB/s 00:11:32.892 07:47:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:32.892 07:47:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:11:33.150 256+0 records in 00:11:33.150 256+0 records out 00:11:33.150 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.191086 s, 5.5 MB/s 00:11:33.150 07:47:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:33.150 07:47:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:11:33.409 256+0 records in 00:11:33.409 256+0 records out 00:11:33.409 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.188443 s, 5.6 MB/s 00:11:33.409 07:47:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:33.409 07:47:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:11:33.409 256+0 records in 00:11:33.409 256+0 records out 00:11:33.409 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.185007 s, 5.7 MB/s 00:11:33.409 07:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:33.409 07:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:11:33.667 256+0 records in 00:11:33.667 256+0 records out 00:11:33.667 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.176153 s, 6.0 MB/s 00:11:33.667 07:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:33.667 07:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:11:33.927 256+0 records in 00:11:33.927 256+0 records out 00:11:33.927 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.149905 s, 7.0 MB/s 00:11:33.927 07:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:11:33.927 07:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:33.927 07:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:33.927 07:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:33.927 07:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:33.927 07:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:33.927 07:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:33.927 07:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:11:33.927 07:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:11:33.927 07:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:33.927 07:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:11:33.927 07:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:33.927 07:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:11:33.927 07:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:33.927 07:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:11:33.927 07:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:33.927 07:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:11:33.927 07:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:33.927 07:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:11:33.927 07:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:33.927 07:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:11:33.927 07:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:33.927 07:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:11:33.927 07:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:33.927 07:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:33.927 07:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:33.927 07:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:33.927 07:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:33.927 07:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:34.190 07:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:34.190 07:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:34.190 07:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:34.190 07:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:34.190 07:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:34.190 07:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:34.190 07:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:34.190 07:47:56 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:11:34.190 07:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:34.190 07:47:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:34.448 07:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:34.448 07:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:34.448 07:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:34.448 07:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:34.448 07:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:34.448 07:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:34.448 07:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:34.448 07:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:34.448 07:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:34.448 07:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:11:34.706 07:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:11:34.965 07:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:11:34.965 07:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:11:34.965 07:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:34.965 07:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:34.965 07:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:11:34.965 07:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:34.965 07:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:34.965 07:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:34.965 07:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:11:35.224 07:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:11:35.224 07:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:11:35.224 07:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:11:35.224 07:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:35.224 07:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:35.224 07:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:11:35.224 07:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:35.224 07:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:35.224 07:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:35.224 07:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:11:35.483 07:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:11:35.483 07:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:11:35.483 07:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:11:35.483 07:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:35.483 07:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:35.483 07:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:11:35.483 07:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:35.483 07:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:35.483 07:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:35.483 07:47:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:11:35.741 07:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:11:35.741 07:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:11:35.741 07:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:11:35.741 07:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:35.741 07:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:35.741 07:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:11:35.741 07:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:35.741 07:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:35.741 07:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:35.741 07:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:11:35.999 07:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:11:35.999 07:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:11:35.999 07:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:11:35.999 07:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:35.999 07:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:35.999 07:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:11:35.999 07:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:35.999 07:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:35.999 07:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:35.999 07:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:35.999 07:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:36.258 07:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:36.258 07:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:36.258 07:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:36.258 07:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:11:36.258 07:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:36.258 07:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:11:36.258 07:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:11:36.258 07:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:11:36.258 07:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:11:36.258 07:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:11:36.258 07:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:36.258 07:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:11:36.258 07:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:11:36.258 07:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:36.258 07:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:11:36.258 07:47:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:11:36.825 malloc_lvol_verify 00:11:36.825 07:47:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:11:36.825 174ae476-9384-4829-8497-5f8e38df2b03 00:11:37.085 07:47:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:11:37.085 f81044b0-66c5-4796-b4eb-af0a949b10a5 00:11:37.085 07:47:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:11:37.658 /dev/nbd0 00:11:37.658 07:48:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:11:37.658 07:48:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:11:37.658 07:48:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:11:37.658 07:48:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:11:37.658 07:48:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:11:37.658 mke2fs 1.47.0 (5-Feb-2023) 00:11:37.658 Discarding device blocks: 0/4096 done 00:11:37.658 Creating filesystem with 4096 1k blocks and 1024 inodes 00:11:37.658 00:11:37.658 Allocating group tables: 0/1 done 00:11:37.658 Writing inode tables: 0/1 done 00:11:37.658 Creating journal (1024 blocks): done 00:11:37.658 Writing superblocks and filesystem accounting information: 0/1 done 00:11:37.658 00:11:37.658 07:48:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:11:37.658 07:48:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:37.658 07:48:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:37.658 07:48:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:37.658 07:48:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:37.658 07:48:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:11:37.658 07:48:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:37.927 07:48:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:37.927 07:48:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:37.927 07:48:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:37.927 07:48:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:37.927 07:48:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:37.927 07:48:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:37.927 07:48:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:37.927 07:48:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:37.927 07:48:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 62976 00:11:37.927 07:48:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 62976 ']' 00:11:37.927 07:48:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 62976 00:11:37.927 07:48:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:11:37.927 07:48:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:37.927 07:48:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62976 00:11:37.927 killing process with pid 62976 00:11:37.928 07:48:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:37.928 07:48:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:37.928 07:48:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62976' 00:11:37.928 07:48:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@969 -- # kill 62976 00:11:37.928 07:48:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@974 -- # wait 62976 00:11:38.864 ************************************ 00:11:38.864 END TEST bdev_nbd 00:11:38.864 ************************************ 00:11:38.864 07:48:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:11:38.864 00:11:38.864 real 0m15.073s 00:11:38.864 user 0m21.554s 00:11:38.864 sys 0m4.946s 00:11:38.864 07:48:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:38.864 07:48:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:11:38.864 07:48:01 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:11:38.864 07:48:01 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:11:38.864 07:48:01 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:11:38.864 skipping fio tests on NVMe due to multi-ns failures. 00:11:38.864 07:48:01 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:11:38.864 07:48:01 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:38.864 07:48:01 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:11:38.864 07:48:01 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:11:38.864 07:48:01 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:38.864 07:48:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:39.121 ************************************ 00:11:39.121 START TEST bdev_verify 00:11:39.121 ************************************ 00:11:39.121 07:48:01 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:11:39.121 [2024-11-06 07:48:01.604298] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:11:39.121 [2024-11-06 07:48:01.604469] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63429 ] 00:11:39.379 [2024-11-06 07:48:01.791396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:39.379 [2024-11-06 07:48:01.924049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.379 [2024-11-06 07:48:01.924062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:40.317 Running I/O for 5 seconds... 
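
Condensed, the bdev_verify stage is a single bdevperf invocation against the generated bdev.json; the flags below are copied from the trace, and the core mask 0x3 matches the two reactors started above:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3
  # -q 128: queue depth, -o 4096: 4 KiB IOs, -w verify: read-back verification workload, -t 5: seconds
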
00:11:42.627 19712.00 IOPS, 77.00 MiB/s [2024-11-06T07:48:06.191Z] 18464.00 IOPS, 72.12 MiB/s [2024-11-06T07:48:06.758Z] 17386.67 IOPS, 67.92 MiB/s [2024-11-06T07:48:08.134Z] 16576.00 IOPS, 64.75 MiB/s [2024-11-06T07:48:08.134Z] 16844.80 IOPS, 65.80 MiB/s 00:11:45.505 Latency(us) 00:11:45.505 [2024-11-06T07:48:08.134Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:45.505 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:45.505 Verification LBA range: start 0x0 length 0xbd0bd 00:11:45.505 Nvme0n1 : 5.06 1188.86 4.64 0.00 0.00 107099.32 24546.21 94848.47 00:11:45.505 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:45.505 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:11:45.505 Nvme0n1 : 5.10 1180.44 4.61 0.00 0.00 108127.39 23950.43 98184.84 00:11:45.505 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:45.505 Verification LBA range: start 0x0 length 0x4ff80 00:11:45.505 Nvme1n1p1 : 5.10 1193.21 4.66 0.00 0.00 106555.73 11141.12 87222.46 00:11:45.505 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:45.505 Verification LBA range: start 0x4ff80 length 0x4ff80 00:11:45.505 Nvme1n1p1 : 5.10 1180.00 4.61 0.00 0.00 107888.23 23235.49 89128.96 00:11:45.505 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:45.505 Verification LBA range: start 0x0 length 0x4ff7f 00:11:45.505 Nvme1n1p2 : 5.10 1192.41 4.66 0.00 0.00 106377.41 12034.79 85315.96 00:11:45.505 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:45.505 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:11:45.505 Nvme1n1p2 : 5.10 1178.86 4.60 0.00 0.00 107761.55 25499.46 88175.71 00:11:45.505 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:45.505 Verification LBA range: start 0x0 length 0x80000 00:11:45.505 Nvme2n1 : 5.10 1191.78 4.66 0.00 0.00 106190.78 13583.83 84362.71 00:11:45.505 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:45.505 Verification LBA range: start 0x80000 length 0x80000 00:11:45.505 Nvme2n1 : 5.11 1178.39 4.60 0.00 0.00 107553.15 25976.09 86269.21 00:11:45.506 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:45.506 Verification LBA range: start 0x0 length 0x80000 00:11:45.506 Nvme2n2 : 5.12 1201.02 4.69 0.00 0.00 105474.26 11200.70 87222.46 00:11:45.506 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:45.506 Verification LBA range: start 0x80000 length 0x80000 00:11:45.506 Nvme2n2 : 5.11 1177.94 4.60 0.00 0.00 107349.09 26333.56 86745.83 00:11:45.506 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:45.506 Verification LBA range: start 0x0 length 0x80000 00:11:45.506 Nvme2n3 : 5.12 1200.68 4.69 0.00 0.00 105249.28 10902.81 90082.21 00:11:45.506 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:45.506 Verification LBA range: start 0x80000 length 0x80000 00:11:45.506 Nvme2n3 : 5.11 1177.57 4.60 0.00 0.00 107135.62 23116.33 88175.71 00:11:45.506 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:11:45.506 Verification LBA range: start 0x0 length 0x20000 00:11:45.506 Nvme3n1 : 5.12 1200.32 4.69 0.00 0.00 105030.03 10843.23 91035.46 00:11:45.506 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:11:45.506 Verification LBA range: start 0x20000 length 0x20000 
00:11:45.506 Nvme3n1 : 5.11 1177.20 4.60 0.00 0.00 106926.34 15073.28 92941.96 00:11:45.506 [2024-11-06T07:48:08.135Z] =================================================================================================================== 00:11:45.506 [2024-11-06T07:48:08.135Z] Total : 16618.68 64.92 0.00 0.00 106757.84 10843.23 98184.84 00:11:46.883 00:11:46.883 real 0m7.657s 00:11:46.883 user 0m14.036s 00:11:46.883 sys 0m0.360s 00:11:46.883 ************************************ 00:11:46.883 END TEST bdev_verify 00:11:46.883 ************************************ 00:11:46.883 07:48:09 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:46.883 07:48:09 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:11:46.883 07:48:09 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:11:46.883 07:48:09 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:11:46.883 07:48:09 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:46.883 07:48:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:46.883 ************************************ 00:11:46.883 START TEST bdev_verify_big_io 00:11:46.883 ************************************ 00:11:46.883 07:48:09 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:11:46.883 [2024-11-06 07:48:09.299982] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:11:46.883 [2024-11-06 07:48:09.300147] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63531 ] 00:11:46.883 [2024-11-06 07:48:09.472639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:47.144 [2024-11-06 07:48:09.603452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.144 [2024-11-06 07:48:09.603462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:48.080 Running I/O for 5 seconds... 
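
The big-IO variant repeats the same verify workload with only the IO size changed (-o 65536 in place of -o 4096), pushing 64 KiB requests through the same bdev stack; as in the trace:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 65536 -w verify -t 5 -C -m 0x3
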
00:11:53.178 1306.00 IOPS, 81.62 MiB/s [2024-11-06T07:48:15.807Z] 3144.00 IOPS, 196.50 MiB/s [2024-11-06T07:48:16.745Z] 2593.33 IOPS, 162.08 MiB/s [2024-11-06T07:48:16.745Z] 2885.75 IOPS, 180.36 MiB/s 00:11:54.116 Latency(us) 00:11:54.116 [2024-11-06T07:48:16.745Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:54.116 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:54.116 Verification LBA range: start 0x0 length 0xbd0b 00:11:54.116 Nvme0n1 : 5.72 128.82 8.05 0.00 0.00 949209.54 22401.40 1082893.03 00:11:54.116 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:54.116 Verification LBA range: start 0xbd0b length 0xbd0b 00:11:54.116 Nvme0n1 : 5.58 137.53 8.60 0.00 0.00 879779.53 29193.31 937998.89 00:11:54.116 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:54.116 Verification LBA range: start 0x0 length 0x4ff8 00:11:54.116 Nvme1n1p1 : 5.72 130.69 8.17 0.00 0.00 913645.44 108193.98 918933.88 00:11:54.116 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:54.116 Verification LBA range: start 0x4ff8 length 0x4ff8 00:11:54.116 Nvme1n1p1 : 5.66 140.63 8.79 0.00 0.00 844947.95 89128.96 888429.85 00:11:54.116 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:54.116 Verification LBA range: start 0x0 length 0x4ff7 00:11:54.116 Nvme1n1p2 : 5.73 125.38 7.84 0.00 0.00 934628.33 88175.71 1570957.50 00:11:54.116 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:54.116 Verification LBA range: start 0x4ff7 length 0x4ff7 00:11:54.116 Nvme1n1p2 : 5.66 146.54 9.16 0.00 0.00 807633.22 72923.69 896055.85 00:11:54.116 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:54.116 Verification LBA range: start 0x0 length 0x8000 00:11:54.116 Nvme2n1 : 5.79 128.60 8.04 0.00 0.00 889104.27 56480.12 1601461.53 00:11:54.116 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:54.116 Verification LBA range: start 0x8000 length 0x8000 00:11:54.116 Nvme2n1 : 5.76 151.81 9.49 0.00 0.00 764237.01 50998.92 785478.75 00:11:54.116 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:54.116 Verification LBA range: start 0x0 length 0x8000 00:11:54.116 Nvme2n2 : 5.84 134.24 8.39 0.00 0.00 832112.20 51475.55 1631965.56 00:11:54.116 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:54.116 Verification LBA range: start 0x8000 length 0x8000 00:11:54.116 Nvme2n2 : 5.80 154.93 9.68 0.00 0.00 730762.78 37891.72 964689.92 00:11:54.116 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:54.116 Verification LBA range: start 0x0 length 0x8000 00:11:54.116 Nvme2n3 : 5.88 144.21 9.01 0.00 0.00 756870.96 18469.24 1662469.59 00:11:54.116 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:54.116 Verification LBA range: start 0x8000 length 0x8000 00:11:54.116 Nvme2n3 : 5.81 159.31 9.96 0.00 0.00 698089.47 41943.04 976128.93 00:11:54.116 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:54.116 Verification LBA range: start 0x0 length 0x2000 00:11:54.116 Nvme3n1 : 5.95 174.70 10.92 0.00 0.00 611559.71 476.63 1692973.61 00:11:54.116 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:54.116 Verification LBA range: start 0x2000 length 0x2000 00:11:54.116 Nvme3n1 : 5.85 174.92 10.93 0.00 0.00 623988.16 
4140.68 831234.79 00:11:54.116 [2024-11-06T07:48:16.745Z] =================================================================================================================== 00:11:54.116 [2024-11-06T07:48:16.745Z] Total : 2032.31 127.02 0.00 0.00 790280.01 476.63 1692973.61 00:11:56.081 00:11:56.081 real 0m9.050s 00:11:56.081 user 0m16.812s 00:11:56.081 sys 0m0.391s 00:11:56.081 07:48:18 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:56.081 ************************************ 00:11:56.081 END TEST bdev_verify_big_io 00:11:56.081 ************************************ 00:11:56.081 07:48:18 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:11:56.081 07:48:18 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:56.081 07:48:18 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:11:56.081 07:48:18 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:56.081 07:48:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:56.081 ************************************ 00:11:56.081 START TEST bdev_write_zeroes 00:11:56.081 ************************************ 00:11:56.081 07:48:18 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:56.081 [2024-11-06 07:48:18.422787] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:11:56.081 [2024-11-06 07:48:18.423016] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63646 ] 00:11:56.081 [2024-11-06 07:48:18.611852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.339 [2024-11-06 07:48:18.744946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.905 Running I/O for 1 seconds... 
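
bdev_write_zeroes swaps the workload for the write_zeroes opcode; per the DPDK parameters in the trace it runs on a single reactor (-c 0x1) for one second:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w write_zeroes -t 1
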
00:11:58.277 55552.00 IOPS, 217.00 MiB/s 00:11:58.277 Latency(us) 00:11:58.277 [2024-11-06T07:48:20.906Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:58.277 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:58.277 Nvme0n1 : 1.03 7924.48 30.96 0.00 0.00 16110.16 14179.61 29074.15 00:11:58.277 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:58.277 Nvme1n1p1 : 1.03 7914.71 30.92 0.00 0.00 16099.04 14120.03 29074.15 00:11:58.277 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:58.277 Nvme1n1p2 : 1.03 7903.85 30.87 0.00 0.00 16067.58 13881.72 27525.12 00:11:58.277 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:58.277 Nvme2n1 : 1.03 7894.50 30.84 0.00 0.00 16012.62 14477.50 26810.18 00:11:58.277 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:58.277 Nvme2n2 : 1.03 7885.98 30.80 0.00 0.00 16003.51 13822.14 26214.40 00:11:58.277 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:58.277 Nvme2n3 : 1.03 7876.22 30.77 0.00 0.00 15972.47 12392.26 27405.96 00:11:58.277 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:58.277 Nvme3n1 : 1.03 7866.23 30.73 0.00 0.00 15943.36 10366.60 29193.31 00:11:58.277 [2024-11-06T07:48:20.906Z] =================================================================================================================== 00:11:58.277 [2024-11-06T07:48:20.906Z] Total : 55265.98 215.88 0.00 0.00 16029.82 10366.60 29193.31 00:11:59.215 00:11:59.215 real 0m3.379s 00:11:59.215 user 0m2.942s 00:11:59.215 sys 0m0.311s 00:11:59.215 ************************************ 00:11:59.215 END TEST bdev_write_zeroes 00:11:59.215 ************************************ 00:11:59.215 07:48:21 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:59.215 07:48:21 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:11:59.215 07:48:21 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:59.215 07:48:21 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:11:59.215 07:48:21 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:59.215 07:48:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:59.215 ************************************ 00:11:59.215 START TEST bdev_json_nonenclosed 00:11:59.215 ************************************ 00:11:59.215 07:48:21 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:59.215 [2024-11-06 07:48:21.831780] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
00:11:59.215 [2024-11-06 07:48:21.831953] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63706 ] 00:11:59.474 [2024-11-06 07:48:22.007305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:59.732 [2024-11-06 07:48:22.136773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.732 [2024-11-06 07:48:22.136890] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:11:59.732 [2024-11-06 07:48:22.136920] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:59.732 [2024-11-06 07:48:22.136934] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:59.991 00:11:59.991 real 0m0.656s 00:11:59.991 user 0m0.422s 00:11:59.991 sys 0m0.128s 00:11:59.991 07:48:22 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:59.991 ************************************ 00:11:59.991 07:48:22 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:11:59.991 END TEST bdev_json_nonenclosed 00:11:59.991 ************************************ 00:11:59.991 07:48:22 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:59.991 07:48:22 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:11:59.991 07:48:22 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:59.991 07:48:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:59.991 ************************************ 00:11:59.991 START TEST bdev_json_nonarray 00:11:59.991 ************************************ 00:11:59.991 07:48:22 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:59.991 [2024-11-06 07:48:22.553615] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:11:59.991 [2024-11-06 07:48:22.553815] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63726 ] 00:12:00.250 [2024-11-06 07:48:22.729867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.250 [2024-11-06 07:48:22.857640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.250 [2024-11-06 07:48:22.857797] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
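
Both JSON tests are negative cases: bdevperf is handed a deliberately malformed config and must exit non-zero. The actual nonenclosed.json and nonarray.json contents are not shown in the log; illustrative inputs that would trip the two errors above:

  "subsystems": []          # nonenclosed: valid JSON fragment, but not enclosed in {} at top level
  { "subsystems": {} }      # nonarray: enclosed, but "subsystems" is an object rather than an array
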
00:12:00.250 [2024-11-06 07:48:22.857828] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:12:00.250 [2024-11-06 07:48:22.857842] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:00.509 00:12:00.509 real 0m0.667s 00:12:00.509 user 0m0.428s 00:12:00.509 sys 0m0.134s 00:12:00.509 ************************************ 00:12:00.509 END TEST bdev_json_nonarray 00:12:00.509 ************************************ 00:12:00.509 07:48:23 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:00.509 07:48:23 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:12:00.768 07:48:23 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:12:00.768 07:48:23 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:12:00.768 07:48:23 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:12:00.768 07:48:23 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:00.768 07:48:23 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:00.768 07:48:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:00.768 ************************************ 00:12:00.768 START TEST bdev_gpt_uuid 00:12:00.768 ************************************ 00:12:00.768 07:48:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # bdev_gpt_uuid 00:12:00.768 07:48:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:12:00.768 07:48:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:12:00.768 07:48:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63757 00:12:00.768 07:48:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:00.768 07:48:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63757 00:12:00.768 07:48:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:00.768 07:48:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # '[' -z 63757 ']' 00:12:00.768 07:48:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:00.768 07:48:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:00.768 07:48:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:00.768 07:48:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:00.768 07:48:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:12:00.768 [2024-11-06 07:48:23.317071] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
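
bdev_gpt_uuid drives spdk_tgt over its RPC socket and checks that each GPT partition is exposed under its unique partition GUID as an alias; a sketch using the RPCs visible below, with the first partition's UUID copied from the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
  "$rpc" bdev_wait_for_examine
  bdev=$("$rpc" bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030)   # look up by partition GUID
  [[ $(jq -r '.[0].aliases[0]' <<< "$bdev") == "6f89f330-603b-4116-ac73-2ca8eae53030" ]]
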
00:12:00.768 [2024-11-06 07:48:23.317314] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63757 ] 00:12:01.027 [2024-11-06 07:48:23.507423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.027 [2024-11-06 07:48:23.637192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.963 07:48:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:01.963 07:48:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # return 0 00:12:01.963 07:48:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:01.963 07:48:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.963 07:48:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:12:02.530 Some configs were skipped because the RPC state that can call them passed over. 00:12:02.531 07:48:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.531 07:48:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:12:02.531 07:48:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.531 07:48:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:12:02.531 07:48:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.531 07:48:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:12:02.531 07:48:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.531 07:48:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:12:02.531 07:48:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.531 07:48:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:12:02.531 { 00:12:02.531 "name": "Nvme1n1p1", 00:12:02.531 "aliases": [ 00:12:02.531 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:12:02.531 ], 00:12:02.531 "product_name": "GPT Disk", 00:12:02.531 "block_size": 4096, 00:12:02.531 "num_blocks": 655104, 00:12:02.531 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:12:02.531 "assigned_rate_limits": { 00:12:02.531 "rw_ios_per_sec": 0, 00:12:02.531 "rw_mbytes_per_sec": 0, 00:12:02.531 "r_mbytes_per_sec": 0, 00:12:02.531 "w_mbytes_per_sec": 0 00:12:02.531 }, 00:12:02.531 "claimed": false, 00:12:02.531 "zoned": false, 00:12:02.531 "supported_io_types": { 00:12:02.531 "read": true, 00:12:02.531 "write": true, 00:12:02.531 "unmap": true, 00:12:02.531 "flush": true, 00:12:02.531 "reset": true, 00:12:02.531 "nvme_admin": false, 00:12:02.531 "nvme_io": false, 00:12:02.531 "nvme_io_md": false, 00:12:02.531 "write_zeroes": true, 00:12:02.531 "zcopy": false, 00:12:02.531 "get_zone_info": false, 00:12:02.531 "zone_management": false, 00:12:02.531 "zone_append": false, 00:12:02.531 "compare": true, 00:12:02.531 "compare_and_write": false, 00:12:02.531 "abort": true, 00:12:02.531 "seek_hole": false, 00:12:02.531 "seek_data": false, 00:12:02.531 "copy": true, 00:12:02.531 "nvme_iov_md": false 00:12:02.531 }, 00:12:02.531 "driver_specific": { 
00:12:02.531 "gpt": { 00:12:02.531 "base_bdev": "Nvme1n1", 00:12:02.531 "offset_blocks": 256, 00:12:02.531 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:12:02.531 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:12:02.531 "partition_name": "SPDK_TEST_first" 00:12:02.531 } 00:12:02.531 } 00:12:02.531 } 00:12:02.531 ]' 00:12:02.531 07:48:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:12:02.531 07:48:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:12:02.531 07:48:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:12:02.531 07:48:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:12:02.531 07:48:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:12:02.531 07:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:12:02.531 07:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:12:02.531 07:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.531 07:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:12:02.531 07:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.531 07:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:12:02.531 { 00:12:02.531 "name": "Nvme1n1p2", 00:12:02.531 "aliases": [ 00:12:02.531 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:12:02.531 ], 00:12:02.531 "product_name": "GPT Disk", 00:12:02.531 "block_size": 4096, 00:12:02.531 "num_blocks": 655103, 00:12:02.531 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:12:02.531 "assigned_rate_limits": { 00:12:02.531 "rw_ios_per_sec": 0, 00:12:02.531 "rw_mbytes_per_sec": 0, 00:12:02.531 "r_mbytes_per_sec": 0, 00:12:02.531 "w_mbytes_per_sec": 0 00:12:02.531 }, 00:12:02.531 "claimed": false, 00:12:02.531 "zoned": false, 00:12:02.531 "supported_io_types": { 00:12:02.531 "read": true, 00:12:02.531 "write": true, 00:12:02.531 "unmap": true, 00:12:02.531 "flush": true, 00:12:02.531 "reset": true, 00:12:02.531 "nvme_admin": false, 00:12:02.531 "nvme_io": false, 00:12:02.531 "nvme_io_md": false, 00:12:02.531 "write_zeroes": true, 00:12:02.531 "zcopy": false, 00:12:02.531 "get_zone_info": false, 00:12:02.531 "zone_management": false, 00:12:02.531 "zone_append": false, 00:12:02.531 "compare": true, 00:12:02.531 "compare_and_write": false, 00:12:02.531 "abort": true, 00:12:02.531 "seek_hole": false, 00:12:02.531 "seek_data": false, 00:12:02.531 "copy": true, 00:12:02.531 "nvme_iov_md": false 00:12:02.531 }, 00:12:02.531 "driver_specific": { 00:12:02.531 "gpt": { 00:12:02.531 "base_bdev": "Nvme1n1", 00:12:02.531 "offset_blocks": 655360, 00:12:02.531 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:12:02.531 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:12:02.531 "partition_name": "SPDK_TEST_second" 00:12:02.531 } 00:12:02.531 } 00:12:02.531 } 00:12:02.531 ]' 00:12:02.531 07:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:12:02.531 07:48:25 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:12:02.531 07:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:12:02.790 07:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:12:02.790 07:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:12:02.790 07:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:12:02.790 07:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 63757 00:12:02.790 07:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # '[' -z 63757 ']' 00:12:02.790 07:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # kill -0 63757 00:12:02.790 07:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # uname 00:12:02.790 07:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:02.790 07:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63757 00:12:02.790 07:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:02.790 killing process with pid 63757 00:12:02.790 07:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:02.790 07:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63757' 00:12:02.790 07:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@969 -- # kill 63757 00:12:02.790 07:48:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@974 -- # wait 63757 00:12:05.323 00:12:05.323 real 0m4.275s 00:12:05.323 user 0m4.548s 00:12:05.323 sys 0m0.593s 00:12:05.323 ************************************ 00:12:05.323 END TEST bdev_gpt_uuid 00:12:05.323 ************************************ 00:12:05.323 07:48:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:05.323 07:48:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:12:05.323 07:48:27 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:12:05.323 07:48:27 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:12:05.323 07:48:27 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:12:05.323 07:48:27 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:12:05.323 07:48:27 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:05.323 07:48:27 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:12:05.323 07:48:27 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:12:05.323 07:48:27 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:12:05.323 07:48:27 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:05.323 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:05.582 Waiting for block devices as requested 00:12:05.582 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:05.582 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:12:05.582 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:05.840 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:11.110 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:11.110 07:48:33 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:12:11.110 07:48:33 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:12:11.110 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:12:11.110 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:12:11.110 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:12:11.110 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:12:11.110 07:48:33 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:12:11.110 00:12:11.110 real 1m6.246s 00:12:11.110 user 1m25.000s 00:12:11.110 sys 0m10.811s 00:12:11.110 07:48:33 blockdev_nvme_gpt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:11.110 ************************************ 00:12:11.111 END TEST blockdev_nvme_gpt 00:12:11.111 07:48:33 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:12:11.111 ************************************ 00:12:11.111 07:48:33 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:12:11.111 07:48:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:11.111 07:48:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:11.111 07:48:33 -- common/autotest_common.sh@10 -- # set +x 00:12:11.111 ************************************ 00:12:11.111 START TEST nvme 00:12:11.111 ************************************ 00:12:11.111 07:48:33 nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:12:11.369 * Looking for test storage... 00:12:11.369 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:11.369 07:48:33 nvme -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:12:11.369 07:48:33 nvme -- common/autotest_common.sh@1689 -- # lcov --version 00:12:11.369 07:48:33 nvme -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:12:11.369 07:48:33 nvme -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:12:11.369 07:48:33 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:11.369 07:48:33 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:11.369 07:48:33 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:11.369 07:48:33 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:11.369 07:48:33 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:11.369 07:48:33 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:11.369 07:48:33 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:11.369 07:48:33 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:11.369 07:48:33 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:11.369 07:48:33 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:11.369 07:48:33 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:11.369 07:48:33 nvme -- scripts/common.sh@344 -- # case "$op" in 00:12:11.369 07:48:33 nvme -- scripts/common.sh@345 -- # : 1 00:12:11.369 07:48:33 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:11.369 07:48:33 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:11.369 07:48:33 nvme -- scripts/common.sh@365 -- # decimal 1 00:12:11.369 07:48:33 nvme -- scripts/common.sh@353 -- # local d=1 00:12:11.369 07:48:33 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:11.369 07:48:33 nvme -- scripts/common.sh@355 -- # echo 1 00:12:11.369 07:48:33 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:11.369 07:48:33 nvme -- scripts/common.sh@366 -- # decimal 2 00:12:11.369 07:48:33 nvme -- scripts/common.sh@353 -- # local d=2 00:12:11.369 07:48:33 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:11.369 07:48:33 nvme -- scripts/common.sh@355 -- # echo 2 00:12:11.369 07:48:33 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:11.369 07:48:33 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:11.369 07:48:33 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:11.369 07:48:33 nvme -- scripts/common.sh@368 -- # return 0 00:12:11.369 07:48:33 nvme -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:11.369 07:48:33 nvme -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:12:11.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.369 --rc genhtml_branch_coverage=1 00:12:11.369 --rc genhtml_function_coverage=1 00:12:11.369 --rc genhtml_legend=1 00:12:11.369 --rc geninfo_all_blocks=1 00:12:11.369 --rc geninfo_unexecuted_blocks=1 00:12:11.369 00:12:11.369 ' 00:12:11.369 07:48:33 nvme -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:12:11.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.369 --rc genhtml_branch_coverage=1 00:12:11.369 --rc genhtml_function_coverage=1 00:12:11.369 --rc genhtml_legend=1 00:12:11.369 --rc geninfo_all_blocks=1 00:12:11.369 --rc geninfo_unexecuted_blocks=1 00:12:11.369 00:12:11.369 ' 00:12:11.369 07:48:33 nvme -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:12:11.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.369 --rc genhtml_branch_coverage=1 00:12:11.369 --rc genhtml_function_coverage=1 00:12:11.369 --rc genhtml_legend=1 00:12:11.369 --rc geninfo_all_blocks=1 00:12:11.369 --rc geninfo_unexecuted_blocks=1 00:12:11.369 00:12:11.369 ' 00:12:11.369 07:48:33 nvme -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:12:11.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.369 --rc genhtml_branch_coverage=1 00:12:11.369 --rc genhtml_function_coverage=1 00:12:11.369 --rc genhtml_legend=1 00:12:11.369 --rc geninfo_all_blocks=1 00:12:11.369 --rc geninfo_unexecuted_blocks=1 00:12:11.369 00:12:11.369 ' 00:12:11.369 07:48:33 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:11.936 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:12.503 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:12.503 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:12.503 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:12.503 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:12.503 07:48:35 nvme -- nvme/nvme.sh@79 -- # uname 00:12:12.503 07:48:35 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:12:12.503 07:48:35 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:12:12.503 07:48:35 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:12:12.503 07:48:35 nvme -- common/autotest_common.sh@1082 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:12:12.503 07:48:35 nvme -- 
common/autotest_common.sh@1068 -- # _randomize_va_space=2 00:12:12.503 07:48:35 nvme -- common/autotest_common.sh@1069 -- # echo 0 00:12:12.503 07:48:35 nvme -- common/autotest_common.sh@1071 -- # stubpid=64408 00:12:12.503 07:48:35 nvme -- common/autotest_common.sh@1070 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:12:12.503 Waiting for stub to ready for secondary processes... 00:12:12.503 07:48:35 nvme -- common/autotest_common.sh@1072 -- # echo Waiting for stub to ready for secondary processes... 00:12:12.503 07:48:35 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:12:12.503 07:48:35 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/64408 ]] 00:12:12.503 07:48:35 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:12:12.761 [2024-11-06 07:48:35.164162] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:12:12.761 [2024-11-06 07:48:35.164377] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:12:13.721 07:48:36 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:12:13.721 07:48:36 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/64408 ]] 00:12:13.721 07:48:36 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:12:13.979 [2024-11-06 07:48:36.547034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:14.238 [2024-11-06 07:48:36.706283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:14.238 [2024-11-06 07:48:36.706364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:14.238 [2024-11-06 07:48:36.706374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:14.238 [2024-11-06 07:48:36.728724] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:12:14.238 [2024-11-06 07:48:36.728794] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:12:14.238 [2024-11-06 07:48:36.741032] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:12:14.238 [2024-11-06 07:48:36.741143] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:12:14.238 [2024-11-06 07:48:36.743315] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:12:14.238 [2024-11-06 07:48:36.743579] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:12:14.238 [2024-11-06 07:48:36.743699] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:12:14.238 [2024-11-06 07:48:36.747302] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:12:14.238 [2024-11-06 07:48:36.747563] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:12:14.238 [2024-11-06 07:48:36.747671] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:12:14.238 [2024-11-06 07:48:36.750701] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:12:14.238 [2024-11-06 07:48:36.750958] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:12:14.238 [2024-11-06 07:48:36.751066] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:12:14.238 [2024-11-06 07:48:36.751137] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:12:14.238 [2024-11-06 07:48:36.751211] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:12:14.497 07:48:37 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:12:14.497 done. 00:12:14.497 07:48:37 nvme -- common/autotest_common.sh@1078 -- # echo done. 00:12:14.497 07:48:37 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:12:14.497 07:48:37 nvme -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:12:14.497 07:48:37 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:14.497 07:48:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:14.755 ************************************ 00:12:14.755 START TEST nvme_reset 00:12:14.755 ************************************ 00:12:14.755 07:48:37 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:12:15.014 Initializing NVMe Controllers 00:12:15.014 Skipping QEMU NVMe SSD at 0000:00:10.0 00:12:15.014 Skipping QEMU NVMe SSD at 0000:00:11.0 00:12:15.014 Skipping QEMU NVMe SSD at 0000:00:13.0 00:12:15.014 Skipping QEMU NVMe SSD at 0000:00:12.0 00:12:15.014 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:12:15.014 00:12:15.014 real 0m0.410s 00:12:15.014 user 0m0.178s 00:12:15.014 sys 0m0.184s 00:12:15.014 07:48:37 nvme.nvme_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:15.014 07:48:37 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:12:15.014 ************************************ 00:12:15.014 END TEST nvme_reset 00:12:15.014 ************************************ 00:12:15.014 07:48:37 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:12:15.014 07:48:37 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:15.014 07:48:37 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:15.014 07:48:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:15.014 ************************************ 00:12:15.014 START TEST nvme_identify 00:12:15.014 ************************************ 00:12:15.014 07:48:37 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # nvme_identify 00:12:15.014 07:48:37 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:12:15.014 07:48:37 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:12:15.014 07:48:37 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:12:15.014 07:48:37 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:12:15.014 07:48:37 nvme.nvme_identify -- common/autotest_common.sh@1494 -- # bdfs=() 00:12:15.014 07:48:37 nvme.nvme_identify -- common/autotest_common.sh@1494 -- # local bdfs 00:12:15.014 07:48:37 nvme.nvme_identify -- common/autotest_common.sh@1495 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:15.014 07:48:37 nvme.nvme_identify -- common/autotest_common.sh@1495 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:15.014 07:48:37 nvme.nvme_identify -- common/autotest_common.sh@1495 -- # jq -r '.config[].params.traddr' 00:12:15.272 07:48:37 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # (( 4 == 0 )) 00:12:15.272 07:48:37 nvme.nvme_identify -- 
common/autotest_common.sh@1500 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:15.272 07:48:37 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:12:15.535 [2024-11-06 07:48:37.967390] nvme_ctrlr.c:3605:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64442 terminated unexpected 00:12:15.535 ===================================================== 00:12:15.535 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:15.535 ===================================================== 00:12:15.535 Controller Capabilities/Features 00:12:15.535 ================================ 00:12:15.535 Vendor ID: 1b36 00:12:15.535 Subsystem Vendor ID: 1af4 00:12:15.535 Serial Number: 12340 00:12:15.535 Model Number: QEMU NVMe Ctrl 00:12:15.535 Firmware Version: 8.0.0 00:12:15.535 Recommended Arb Burst: 6 00:12:15.535 IEEE OUI Identifier: 00 54 52 00:12:15.535 Multi-path I/O 00:12:15.535 May have multiple subsystem ports: No 00:12:15.535 May have multiple controllers: No 00:12:15.535 Associated with SR-IOV VF: No 00:12:15.535 Max Data Transfer Size: 524288 00:12:15.535 Max Number of Namespaces: 256 00:12:15.535 Max Number of I/O Queues: 64 00:12:15.535 NVMe Specification Version (VS): 1.4 00:12:15.535 NVMe Specification Version (Identify): 1.4 00:12:15.535 Maximum Queue Entries: 2048 00:12:15.535 Contiguous Queues Required: Yes 00:12:15.535 Arbitration Mechanisms Supported 00:12:15.535 Weighted Round Robin: Not Supported 00:12:15.535 Vendor Specific: Not Supported 00:12:15.535 Reset Timeout: 7500 ms 00:12:15.535 Doorbell Stride: 4 bytes 00:12:15.535 NVM Subsystem Reset: Not Supported 00:12:15.535 Command Sets Supported 00:12:15.535 NVM Command Set: Supported 00:12:15.535 Boot Partition: Not Supported 00:12:15.535 Memory Page Size Minimum: 4096 bytes 00:12:15.535 Memory Page Size Maximum: 65536 bytes 00:12:15.535 Persistent Memory Region: Not Supported 00:12:15.535 Optional Asynchronous Events Supported 00:12:15.535 Namespace Attribute Notices: Supported 00:12:15.535 Firmware Activation Notices: Not Supported 00:12:15.535 ANA Change Notices: Not Supported 00:12:15.535 PLE Aggregate Log Change Notices: Not Supported 00:12:15.535 LBA Status Info Alert Notices: Not Supported 00:12:15.535 EGE Aggregate Log Change Notices: Not Supported 00:12:15.535 Normal NVM Subsystem Shutdown event: Not Supported 00:12:15.535 Zone Descriptor Change Notices: Not Supported 00:12:15.535 Discovery Log Change Notices: Not Supported 00:12:15.535 Controller Attributes 00:12:15.535 128-bit Host Identifier: Not Supported 00:12:15.535 Non-Operational Permissive Mode: Not Supported 00:12:15.535 NVM Sets: Not Supported 00:12:15.535 Read Recovery Levels: Not Supported 00:12:15.535 Endurance Groups: Not Supported 00:12:15.535 Predictable Latency Mode: Not Supported 00:12:15.535 Traffic Based Keep ALive: Not Supported 00:12:15.535 Namespace Granularity: Not Supported 00:12:15.535 SQ Associations: Not Supported 00:12:15.535 UUID List: Not Supported 00:12:15.535 Multi-Domain Subsystem: Not Supported 00:12:15.535 Fixed Capacity Management: Not Supported 00:12:15.535 Variable Capacity Management: Not Supported 00:12:15.535 Delete Endurance Group: Not Supported 00:12:15.535 Delete NVM Set: Not Supported 00:12:15.535 Extended LBA Formats Supported: Supported 00:12:15.535 Flexible Data Placement Supported: Not Supported 00:12:15.535 00:12:15.535 Controller Memory Buffer Support 00:12:15.535 ================================ 00:12:15.535 Supported: No 
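
The bdf discovery traced just before this dump (common/autotest_common.sh@1495) reduces to a single pipeline; a minimal sketch, assuming only that rootdir points at the SPDK checkout used in this run:

  #!/usr/bin/env bash
  # Sketch of the get_nvme_bdfs step traced above; rootdir is an assumption.
  rootdir=/home/vagrant/spdk_repo/spdk
  # gen_nvme.sh emits a JSON bdev config; jq pulls each controller's PCI address (traddr).
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  # The harness aborts when no controllers are found; this run found four.
  (( ${#bdfs[@]} == 0 )) && { echo 'no NVMe controllers found' >&2; exit 1; }
  printf '%s\n' "${bdfs[@]}"   # here: 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
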
00:12:15.535 00:12:15.535 Persistent Memory Region Support 00:12:15.535 ================================ 00:12:15.535 Supported: No 00:12:15.535 00:12:15.535 Admin Command Set Attributes 00:12:15.535 ============================ 00:12:15.535 Security Send/Receive: Not Supported 00:12:15.535 Format NVM: Supported 00:12:15.535 Firmware Activate/Download: Not Supported 00:12:15.535 Namespace Management: Supported 00:12:15.535 Device Self-Test: Not Supported 00:12:15.535 Directives: Supported 00:12:15.535 NVMe-MI: Not Supported 00:12:15.535 Virtualization Management: Not Supported 00:12:15.535 Doorbell Buffer Config: Supported 00:12:15.535 Get LBA Status Capability: Not Supported 00:12:15.535 Command & Feature Lockdown Capability: Not Supported 00:12:15.535 Abort Command Limit: 4 00:12:15.535 Async Event Request Limit: 4 00:12:15.535 Number of Firmware Slots: N/A 00:12:15.535 Firmware Slot 1 Read-Only: N/A 00:12:15.535 Firmware Activation Without Reset: N/A 00:12:15.535 Multiple Update Detection Support: N/A 00:12:15.535 Firmware Update Granularity: No Information Provided 00:12:15.535 Per-Namespace SMART Log: Yes 00:12:15.535 Asymmetric Namespace Access Log Page: Not Supported 00:12:15.535 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:12:15.535 Command Effects Log Page: Supported 00:12:15.535 Get Log Page Extended Data: Supported 00:12:15.535 Telemetry Log Pages: Not Supported 00:12:15.535 Persistent Event Log Pages: Not Supported 00:12:15.535 Supported Log Pages Log Page: May Support 00:12:15.535 Commands Supported & Effects Log Page: Not Supported 00:12:15.535 Feature Identifiers & Effects Log Page:May Support 00:12:15.535 NVMe-MI Commands & Effects Log Page: May Support 00:12:15.535 Data Area 4 for Telemetry Log: Not Supported 00:12:15.535 Error Log Page Entries Supported: 1 00:12:15.535 Keep Alive: Not Supported 00:12:15.535 00:12:15.535 NVM Command Set Attributes 00:12:15.535 ========================== 00:12:15.535 Submission Queue Entry Size 00:12:15.535 Max: 64 00:12:15.535 Min: 64 00:12:15.535 Completion Queue Entry Size 00:12:15.535 Max: 16 00:12:15.535 Min: 16 00:12:15.535 Number of Namespaces: 256 00:12:15.535 Compare Command: Supported 00:12:15.535 Write Uncorrectable Command: Not Supported 00:12:15.535 Dataset Management Command: Supported 00:12:15.535 Write Zeroes Command: Supported 00:12:15.535 Set Features Save Field: Supported 00:12:15.535 Reservations: Not Supported 00:12:15.535 Timestamp: Supported 00:12:15.535 Copy: Supported 00:12:15.535 Volatile Write Cache: Present 00:12:15.535 Atomic Write Unit (Normal): 1 00:12:15.535 Atomic Write Unit (PFail): 1 00:12:15.535 Atomic Compare & Write Unit: 1 00:12:15.535 Fused Compare & Write: Not Supported 00:12:15.535 Scatter-Gather List 00:12:15.535 SGL Command Set: Supported 00:12:15.535 SGL Keyed: Not Supported 00:12:15.535 SGL Bit Bucket Descriptor: Not Supported 00:12:15.535 SGL Metadata Pointer: Not Supported 00:12:15.535 Oversized SGL: Not Supported 00:12:15.535 SGL Metadata Address: Not Supported 00:12:15.535 SGL Offset: Not Supported 00:12:15.535 Transport SGL Data Block: Not Supported 00:12:15.535 Replay Protected Memory Block: Not Supported 00:12:15.535 00:12:15.535 Firmware Slot Information 00:12:15.535 ========================= 00:12:15.535 Active slot: 1 00:12:15.535 Slot 1 Firmware Revision: 1.0 00:12:15.535 00:12:15.535 00:12:15.535 Commands Supported and Effects 00:12:15.535 ============================== 00:12:15.535 Admin Commands 00:12:15.535 -------------- 00:12:15.535 Delete I/O Submission Queue (00h): Supported 
00:12:15.535 Create I/O Submission Queue (01h): Supported 00:12:15.535 Get Log Page (02h): Supported 00:12:15.535 Delete I/O Completion Queue (04h): Supported 00:12:15.535 Create I/O Completion Queue (05h): Supported 00:12:15.535 Identify (06h): Supported 00:12:15.535 Abort (08h): Supported 00:12:15.535 Set Features (09h): Supported 00:12:15.535 Get Features (0Ah): Supported 00:12:15.535 Asynchronous Event Request (0Ch): Supported 00:12:15.536 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:15.536 Directive Send (19h): Supported 00:12:15.536 Directive Receive (1Ah): Supported 00:12:15.536 Virtualization Management (1Ch): Supported 00:12:15.536 Doorbell Buffer Config (7Ch): Supported 00:12:15.536 Format NVM (80h): Supported LBA-Change 00:12:15.536 I/O Commands 00:12:15.536 ------------ 00:12:15.536 Flush (00h): Supported LBA-Change 00:12:15.536 Write (01h): Supported LBA-Change 00:12:15.536 Read (02h): Supported 00:12:15.536 Compare (05h): Supported 00:12:15.536 Write Zeroes (08h): Supported LBA-Change 00:12:15.536 Dataset Management (09h): Supported LBA-Change 00:12:15.536 Unknown (0Ch): Supported 00:12:15.536 Unknown (12h): Supported 00:12:15.536 Copy (19h): Supported LBA-Change 00:12:15.536 Unknown (1Dh): Supported LBA-Change 00:12:15.536 00:12:15.536 Error Log 00:12:15.536 ========= 00:12:15.536 00:12:15.536 Arbitration 00:12:15.536 =========== 00:12:15.536 Arbitration Burst: no limit 00:12:15.536 00:12:15.536 Power Management 00:12:15.536 ================ 00:12:15.536 Number of Power States: 1 00:12:15.536 Current Power State: Power State #0 00:12:15.536 Power State #0: 00:12:15.536 Max Power: 25.00 W 00:12:15.536 Non-Operational State: Operational 00:12:15.536 Entry Latency: 16 microseconds 00:12:15.536 Exit Latency: 4 microseconds 00:12:15.536 Relative Read Throughput: 0 00:12:15.536 Relative Read Latency: 0 00:12:15.536 Relative Write Throughput: 0 00:12:15.536 Relative Write Latency: 0 00:12:15.536 [2024-11-06 07:48:37.968931] nvme_ctrlr.c:3605:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64442 terminated unexpected 00:12:15.536 Idle Power: Not Reported 00:12:15.536 Active Power: Not Reported 00:12:15.536 Non-Operational Permissive Mode: Not Supported 00:12:15.536 00:12:15.536 Health Information 00:12:15.536 ================== 00:12:15.536 Critical Warnings: 00:12:15.536 Available Spare Space: OK 00:12:15.536 Temperature: OK 00:12:15.536 Device Reliability: OK 00:12:15.536 Read Only: No 00:12:15.536 Volatile Memory Backup: OK 00:12:15.536 Current Temperature: 323 Kelvin (50 Celsius) 00:12:15.536 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:15.536 Available Spare: 0% 00:12:15.536 Available Spare Threshold: 0% 00:12:15.536 Life Percentage Used: 0% 00:12:15.536 Data Units Read: 676 00:12:15.536 Data Units Written: 604 00:12:15.536 Host Read Commands: 31701 00:12:15.536 Host Write Commands: 31487 00:12:15.536 Controller Busy Time: 0 minutes 00:12:15.536 Power Cycles: 0 00:12:15.536 Power On Hours: 0 hours 00:12:15.536 Unsafe Shutdowns: 0 00:12:15.536 Unrecoverable Media Errors: 0 00:12:15.536 Lifetime Error Log Entries: 0 00:12:15.536 Warning Temperature Time: 0 minutes 00:12:15.536 Critical Temperature Time: 0 minutes 00:12:15.536 00:12:15.536 Number of Queues 00:12:15.536 ================ 00:12:15.536 Number of I/O Submission Queues: 64 00:12:15.536 Number of I/O Completion Queues: 64 00:12:15.536 00:12:15.536 ZNS Specific Controller Data 00:12:15.536 ============================ 00:12:15.536 Zone Append Size Limit: 0 00:12:15.536
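
For reference, the bdev_gpt_uuid assertions that passed at the top of this excerpt (blockdev.sh@665 and @666) reduce to two jq comparisons against the partition's GPT GUID; a hedged sketch, where the bdev name p1 and the rpc.py invocation are illustrative assumptions and only the jq filters and the UUID come from the trace:

  #!/usr/bin/env bash
  # Hedged reconstruction of the GPT UUID checks; 'p1' is an assumed bdev name.
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  gp_uuid=abf1734f-66e5-4c0f-aa29-4021d4d307df
  bdev_json=$("$rpc_py" bdev_get_bdevs -b p1)
  # Both the bdev alias and the GPT unique_partition_guid must equal the GUID.
  [[ $(jq -r '.[0].aliases[0]' <<<"$bdev_json") == "$gp_uuid" ]]
  [[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<<"$bdev_json") == "$gp_uuid" ]]
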
00:12:15.536 00:12:15.536 Active Namespaces 00:12:15.536 ================= 00:12:15.536 Namespace ID:1 00:12:15.536 Error Recovery Timeout: Unlimited 00:12:15.536 Command Set Identifier: NVM (00h) 00:12:15.536 Deallocate: Supported 00:12:15.536 Deallocated/Unwritten Error: Supported 00:12:15.536 Deallocated Read Value: All 0x00 00:12:15.536 Deallocate in Write Zeroes: Not Supported 00:12:15.536 Deallocated Guard Field: 0xFFFF 00:12:15.536 Flush: Supported 00:12:15.536 Reservation: Not Supported 00:12:15.536 Metadata Transferred as: Separate Metadata Buffer 00:12:15.536 Namespace Sharing Capabilities: Private 00:12:15.536 Size (in LBAs): 1548666 (5GiB) 00:12:15.536 Capacity (in LBAs): 1548666 (5GiB) 00:12:15.536 Utilization (in LBAs): 1548666 (5GiB) 00:12:15.536 Thin Provisioning: Not Supported 00:12:15.536 Per-NS Atomic Units: No 00:12:15.536 Maximum Single Source Range Length: 128 00:12:15.536 Maximum Copy Length: 128 00:12:15.536 Maximum Source Range Count: 128 00:12:15.536 NGUID/EUI64 Never Reused: No 00:12:15.536 Namespace Write Protected: No 00:12:15.536 Number of LBA Formats: 8 00:12:15.536 Current LBA Format: LBA Format #07 00:12:15.536 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:15.536 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:15.536 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:15.536 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:15.536 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:15.536 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:15.536 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:15.536 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:15.536 00:12:15.536 NVM Specific Namespace Data 00:12:15.536 =========================== 00:12:15.536 Logical Block Storage Tag Mask: 0 00:12:15.536 Protection Information Capabilities: 00:12:15.536 16b Guard Protection Information Storage Tag Support: No 00:12:15.536 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:15.536 Storage Tag Check Read Support: No 00:12:15.536 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.536 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.536 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.536 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.536 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.536 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.536 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.536 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.536 ===================================================== 00:12:15.536 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:15.536 ===================================================== 00:12:15.536 Controller Capabilities/Features 00:12:15.536 ================================ 00:12:15.536 Vendor ID: 1b36 00:12:15.536 Subsystem Vendor ID: 1af4 00:12:15.536 Serial Number: 12341 00:12:15.536 Model Number: QEMU NVMe Ctrl 00:12:15.536 Firmware Version: 8.0.0 00:12:15.536 Recommended Arb Burst: 6 00:12:15.536 IEEE OUI Identifier: 00 54 52 00:12:15.536 Multi-path I/O 00:12:15.536 May have multiple subsystem ports: No 00:12:15.536 May have multiple controllers: No 
00:12:15.536 Associated with SR-IOV VF: No 00:12:15.536 Max Data Transfer Size: 524288 00:12:15.536 Max Number of Namespaces: 256 00:12:15.536 Max Number of I/O Queues: 64 00:12:15.536 NVMe Specification Version (VS): 1.4 00:12:15.536 NVMe Specification Version (Identify): 1.4 00:12:15.536 Maximum Queue Entries: 2048 00:12:15.536 Contiguous Queues Required: Yes 00:12:15.536 Arbitration Mechanisms Supported 00:12:15.536 Weighted Round Robin: Not Supported 00:12:15.536 Vendor Specific: Not Supported 00:12:15.536 Reset Timeout: 7500 ms 00:12:15.536 Doorbell Stride: 4 bytes 00:12:15.536 NVM Subsystem Reset: Not Supported 00:12:15.536 Command Sets Supported 00:12:15.536 NVM Command Set: Supported 00:12:15.536 Boot Partition: Not Supported 00:12:15.536 Memory Page Size Minimum: 4096 bytes 00:12:15.536 Memory Page Size Maximum: 65536 bytes 00:12:15.536 Persistent Memory Region: Not Supported 00:12:15.536 Optional Asynchronous Events Supported 00:12:15.536 Namespace Attribute Notices: Supported 00:12:15.536 Firmware Activation Notices: Not Supported 00:12:15.536 ANA Change Notices: Not Supported 00:12:15.536 PLE Aggregate Log Change Notices: Not Supported 00:12:15.536 LBA Status Info Alert Notices: Not Supported 00:12:15.536 EGE Aggregate Log Change Notices: Not Supported 00:12:15.536 Normal NVM Subsystem Shutdown event: Not Supported 00:12:15.536 Zone Descriptor Change Notices: Not Supported 00:12:15.536 Discovery Log Change Notices: Not Supported 00:12:15.536 Controller Attributes 00:12:15.536 128-bit Host Identifier: Not Supported 00:12:15.536 Non-Operational Permissive Mode: Not Supported 00:12:15.536 NVM Sets: Not Supported 00:12:15.536 Read Recovery Levels: Not Supported 00:12:15.536 Endurance Groups: Not Supported 00:12:15.536 Predictable Latency Mode: Not Supported 00:12:15.536 Traffic Based Keep ALive: Not Supported 00:12:15.536 Namespace Granularity: Not Supported 00:12:15.536 SQ Associations: Not Supported 00:12:15.537 UUID List: Not Supported 00:12:15.537 Multi-Domain Subsystem: Not Supported 00:12:15.537 Fixed Capacity Management: Not Supported 00:12:15.537 Variable Capacity Management: Not Supported 00:12:15.537 Delete Endurance Group: Not Supported 00:12:15.537 Delete NVM Set: Not Supported 00:12:15.537 Extended LBA Formats Supported: Supported 00:12:15.537 Flexible Data Placement Supported: Not Supported 00:12:15.537 00:12:15.537 Controller Memory Buffer Support 00:12:15.537 ================================ 00:12:15.537 Supported: No 00:12:15.537 00:12:15.537 Persistent Memory Region Support 00:12:15.537 ================================ 00:12:15.537 Supported: No 00:12:15.537 00:12:15.537 Admin Command Set Attributes 00:12:15.537 ============================ 00:12:15.537 Security Send/Receive: Not Supported 00:12:15.537 Format NVM: Supported 00:12:15.537 Firmware Activate/Download: Not Supported 00:12:15.537 Namespace Management: Supported 00:12:15.537 Device Self-Test: Not Supported 00:12:15.537 Directives: Supported 00:12:15.537 NVMe-MI: Not Supported 00:12:15.537 Virtualization Management: Not Supported 00:12:15.537 Doorbell Buffer Config: Supported 00:12:15.537 Get LBA Status Capability: Not Supported 00:12:15.537 Command & Feature Lockdown Capability: Not Supported 00:12:15.537 Abort Command Limit: 4 00:12:15.537 Async Event Request Limit: 4 00:12:15.537 Number of Firmware Slots: N/A 00:12:15.537 Firmware Slot 1 Read-Only: N/A 00:12:15.537 Firmware Activation Without Reset: N/A 00:12:15.537 Multiple Update Detection Support: N/A 00:12:15.537 Firmware Update Granularity: No 
Information Provided 00:12:15.537 Per-Namespace SMART Log: Yes 00:12:15.537 Asymmetric Namespace Access Log Page: Not Supported 00:12:15.537 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:12:15.537 Command Effects Log Page: Supported 00:12:15.537 Get Log Page Extended Data: Supported 00:12:15.537 Telemetry Log Pages: Not Supported 00:12:15.537 Persistent Event Log Pages: Not Supported 00:12:15.537 Supported Log Pages Log Page: May Support 00:12:15.537 Commands Supported & Effects Log Page: Not Supported 00:12:15.537 Feature Identifiers & Effects Log Page:May Support 00:12:15.537 NVMe-MI Commands & Effects Log Page: May Support 00:12:15.537 Data Area 4 for Telemetry Log: Not Supported 00:12:15.537 Error Log Page Entries Supported: 1 00:12:15.537 Keep Alive: Not Supported 00:12:15.537 00:12:15.537 NVM Command Set Attributes 00:12:15.537 ========================== 00:12:15.537 Submission Queue Entry Size 00:12:15.537 Max: 64 00:12:15.537 Min: 64 00:12:15.537 Completion Queue Entry Size 00:12:15.537 Max: 16 00:12:15.537 Min: 16 00:12:15.537 Number of Namespaces: 256 00:12:15.537 Compare Command: Supported 00:12:15.537 Write Uncorrectable Command: Not Supported 00:12:15.537 Dataset Management Command: Supported 00:12:15.537 Write Zeroes Command: Supported 00:12:15.537 Set Features Save Field: Supported 00:12:15.537 Reservations: Not Supported 00:12:15.537 Timestamp: Supported 00:12:15.537 Copy: Supported 00:12:15.537 Volatile Write Cache: Present 00:12:15.537 Atomic Write Unit (Normal): 1 00:12:15.537 Atomic Write Unit (PFail): 1 00:12:15.537 Atomic Compare & Write Unit: 1 00:12:15.537 Fused Compare & Write: Not Supported 00:12:15.537 Scatter-Gather List 00:12:15.537 SGL Command Set: Supported 00:12:15.537 SGL Keyed: Not Supported 00:12:15.537 SGL Bit Bucket Descriptor: Not Supported 00:12:15.537 SGL Metadata Pointer: Not Supported 00:12:15.537 Oversized SGL: Not Supported 00:12:15.537 SGL Metadata Address: Not Supported 00:12:15.537 SGL Offset: Not Supported 00:12:15.537 Transport SGL Data Block: Not Supported 00:12:15.537 Replay Protected Memory Block: Not Supported 00:12:15.537 00:12:15.537 Firmware Slot Information 00:12:15.537 ========================= 00:12:15.537 Active slot: 1 00:12:15.537 Slot 1 Firmware Revision: 1.0 00:12:15.537 00:12:15.537 00:12:15.537 Commands Supported and Effects 00:12:15.537 ============================== 00:12:15.537 Admin Commands 00:12:15.537 -------------- 00:12:15.537 Delete I/O Submission Queue (00h): Supported 00:12:15.537 Create I/O Submission Queue (01h): Supported 00:12:15.537 Get Log Page (02h): Supported 00:12:15.537 Delete I/O Completion Queue (04h): Supported 00:12:15.537 Create I/O Completion Queue (05h): Supported 00:12:15.537 Identify (06h): Supported 00:12:15.537 Abort (08h): Supported 00:12:15.537 Set Features (09h): Supported 00:12:15.537 Get Features (0Ah): Supported 00:12:15.537 Asynchronous Event Request (0Ch): Supported 00:12:15.537 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:15.537 Directive Send (19h): Supported 00:12:15.537 Directive Receive (1Ah): Supported 00:12:15.537 Virtualization Management (1Ch): Supported 00:12:15.537 Doorbell Buffer Config (7Ch): Supported 00:12:15.537 Format NVM (80h): Supported LBA-Change 00:12:15.537 I/O Commands 00:12:15.537 ------------ 00:12:15.537 Flush (00h): Supported LBA-Change 00:12:15.537 Write (01h): Supported LBA-Change 00:12:15.537 Read (02h): Supported 00:12:15.537 Compare (05h): Supported 00:12:15.537 Write Zeroes (08h): Supported LBA-Change 00:12:15.537 Dataset Management 
(09h): Supported LBA-Change 00:12:15.537 Unknown (0Ch): Supported 00:12:15.537 Unknown (12h): Supported 00:12:15.537 Copy (19h): Supported LBA-Change 00:12:15.537 Unknown (1Dh): Supported LBA-Change 00:12:15.537 00:12:15.537 Error Log 00:12:15.537 ========= 00:12:15.537 00:12:15.537 Arbitration 00:12:15.537 =========== 00:12:15.537 Arbitration Burst: no limit 00:12:15.537 00:12:15.537 Power Management 00:12:15.537 ================ 00:12:15.537 Number of Power States: 1 00:12:15.537 Current Power State: Power State #0 00:12:15.537 Power State #0: 00:12:15.537 Max Power: 25.00 W 00:12:15.537 Non-Operational State: Operational 00:12:15.537 Entry Latency: 16 microseconds 00:12:15.537 Exit Latency: 4 microseconds 00:12:15.537 Relative Read Throughput: 0 00:12:15.537 Relative Read Latency: 0 00:12:15.537 Relative Write Throughput: 0 00:12:15.537 Relative Write Latency: 0 00:12:15.537 Idle Power: Not Reported 00:12:15.537 Active Power: Not Reported 00:12:15.537 Non-Operational Permissive Mode: Not Supported 00:12:15.537 00:12:15.537 Health Information 00:12:15.537 ================== 00:12:15.537 Critical Warnings: 00:12:15.537 Available Spare Space: OK 00:12:15.537 [2024-11-06 07:48:37.970017] nvme_ctrlr.c:3605:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64442 terminated unexpected 00:12:15.537 Temperature: OK 00:12:15.537 Device Reliability: OK 00:12:15.537 Read Only: No 00:12:15.537 Volatile Memory Backup: OK 00:12:15.537 Current Temperature: 323 Kelvin (50 Celsius) 00:12:15.537 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:15.537 Available Spare: 0% 00:12:15.537 Available Spare Threshold: 0% 00:12:15.537 Life Percentage Used: 0% 00:12:15.537 Data Units Read: 1053 00:12:15.537 Data Units Written: 920 00:12:15.537 Host Read Commands: 46564 00:12:15.537 Host Write Commands: 45356 00:12:15.537 Controller Busy Time: 0 minutes 00:12:15.537 Power Cycles: 0 00:12:15.537 Power On Hours: 0 hours 00:12:15.537 Unsafe Shutdowns: 0 00:12:15.537 Unrecoverable Media Errors: 0 00:12:15.537 Lifetime Error Log Entries: 0 00:12:15.537 Warning Temperature Time: 0 minutes 00:12:15.537 Critical Temperature Time: 0 minutes 00:12:15.537 00:12:15.537 Number of Queues 00:12:15.537 ================ 00:12:15.537 Number of I/O Submission Queues: 64 00:12:15.537 Number of I/O Completion Queues: 64 00:12:15.537 00:12:15.537 ZNS Specific Controller Data 00:12:15.537 ============================ 00:12:15.537 Zone Append Size Limit: 0 00:12:15.537 00:12:15.537 00:12:15.537 Active Namespaces 00:12:15.537 ================= 00:12:15.537 Namespace ID:1 00:12:15.537 Error Recovery Timeout: Unlimited 00:12:15.537 Command Set Identifier: NVM (00h) 00:12:15.537 Deallocate: Supported 00:12:15.537 Deallocated/Unwritten Error: Supported 00:12:15.537 Deallocated Read Value: All 0x00 00:12:15.537 Deallocate in Write Zeroes: Not Supported 00:12:15.537 Deallocated Guard Field: 0xFFFF 00:12:15.537 Flush: Supported 00:12:15.537 Reservation: Not Supported 00:12:15.537 Namespace Sharing Capabilities: Private 00:12:15.537 Size (in LBAs): 1310720 (5GiB) 00:12:15.537 Capacity (in LBAs): 1310720 (5GiB) 00:12:15.537 Utilization (in LBAs): 1310720 (5GiB) 00:12:15.537 Thin Provisioning: Not Supported 00:12:15.538 Per-NS Atomic Units: No 00:12:15.538 Maximum Single Source Range Length: 128 00:12:15.538 Maximum Copy Length: 128 00:12:15.538 Maximum Source Range Count: 128 00:12:15.538 NGUID/EUI64 Never Reused: No 00:12:15.538 Namespace Write Protected: No 00:12:15.538 Number of LBA Formats: 8 00:12:15.538 Current LBA Format: 
LBA Format #04 00:12:15.538 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:15.538 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:15.538 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:15.538 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:15.538 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:15.538 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:15.538 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:15.538 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:15.538 00:12:15.538 NVM Specific Namespace Data 00:12:15.538 =========================== 00:12:15.538 Logical Block Storage Tag Mask: 0 00:12:15.538 Protection Information Capabilities: 00:12:15.538 16b Guard Protection Information Storage Tag Support: No 00:12:15.538 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:15.538 Storage Tag Check Read Support: No 00:12:15.538 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.538 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.538 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.538 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.538 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.538 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.538 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.538 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.538 ===================================================== 00:12:15.538 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:15.538 ===================================================== 00:12:15.538 Controller Capabilities/Features 00:12:15.538 ================================ 00:12:15.538 Vendor ID: 1b36 00:12:15.538 Subsystem Vendor ID: 1af4 00:12:15.538 Serial Number: 12343 00:12:15.538 Model Number: QEMU NVMe Ctrl 00:12:15.538 Firmware Version: 8.0.0 00:12:15.538 Recommended Arb Burst: 6 00:12:15.538 IEEE OUI Identifier: 00 54 52 00:12:15.538 Multi-path I/O 00:12:15.538 May have multiple subsystem ports: No 00:12:15.538 May have multiple controllers: Yes 00:12:15.538 Associated with SR-IOV VF: No 00:12:15.538 Max Data Transfer Size: 524288 00:12:15.538 Max Number of Namespaces: 256 00:12:15.538 Max Number of I/O Queues: 64 00:12:15.538 NVMe Specification Version (VS): 1.4 00:12:15.538 NVMe Specification Version (Identify): 1.4 00:12:15.538 Maximum Queue Entries: 2048 00:12:15.538 Contiguous Queues Required: Yes 00:12:15.538 Arbitration Mechanisms Supported 00:12:15.538 Weighted Round Robin: Not Supported 00:12:15.538 Vendor Specific: Not Supported 00:12:15.538 Reset Timeout: 7500 ms 00:12:15.538 Doorbell Stride: 4 bytes 00:12:15.538 NVM Subsystem Reset: Not Supported 00:12:15.538 Command Sets Supported 00:12:15.538 NVM Command Set: Supported 00:12:15.538 Boot Partition: Not Supported 00:12:15.538 Memory Page Size Minimum: 4096 bytes 00:12:15.538 Memory Page Size Maximum: 65536 bytes 00:12:15.538 Persistent Memory Region: Not Supported 00:12:15.538 Optional Asynchronous Events Supported 00:12:15.538 Namespace Attribute Notices: Supported 00:12:15.538 Firmware Activation Notices: Not Supported 00:12:15.538 ANA Change Notices: Not Supported 00:12:15.538 PLE Aggregate Log 
Change Notices: Not Supported 00:12:15.538 LBA Status Info Alert Notices: Not Supported 00:12:15.538 EGE Aggregate Log Change Notices: Not Supported 00:12:15.538 Normal NVM Subsystem Shutdown event: Not Supported 00:12:15.538 Zone Descriptor Change Notices: Not Supported 00:12:15.538 Discovery Log Change Notices: Not Supported 00:12:15.538 Controller Attributes 00:12:15.538 128-bit Host Identifier: Not Supported 00:12:15.538 Non-Operational Permissive Mode: Not Supported 00:12:15.538 NVM Sets: Not Supported 00:12:15.538 Read Recovery Levels: Not Supported 00:12:15.538 Endurance Groups: Supported 00:12:15.538 Predictable Latency Mode: Not Supported 00:12:15.538 Traffic Based Keep ALive: Not Supported 00:12:15.538 Namespace Granularity: Not Supported 00:12:15.538 SQ Associations: Not Supported 00:12:15.538 UUID List: Not Supported 00:12:15.538 Multi-Domain Subsystem: Not Supported 00:12:15.538 Fixed Capacity Management: Not Supported 00:12:15.538 Variable Capacity Management: Not Supported 00:12:15.538 Delete Endurance Group: Not Supported 00:12:15.538 Delete NVM Set: Not Supported 00:12:15.538 Extended LBA Formats Supported: Supported 00:12:15.538 Flexible Data Placement Supported: Supported 00:12:15.538 00:12:15.538 Controller Memory Buffer Support 00:12:15.538 ================================ 00:12:15.538 Supported: No 00:12:15.538 00:12:15.538 Persistent Memory Region Support 00:12:15.538 ================================ 00:12:15.538 Supported: No 00:12:15.538 00:12:15.538 Admin Command Set Attributes 00:12:15.538 ============================ 00:12:15.538 Security Send/Receive: Not Supported 00:12:15.538 Format NVM: Supported 00:12:15.538 Firmware Activate/Download: Not Supported 00:12:15.538 Namespace Management: Supported 00:12:15.538 Device Self-Test: Not Supported 00:12:15.538 Directives: Supported 00:12:15.538 NVMe-MI: Not Supported 00:12:15.538 Virtualization Management: Not Supported 00:12:15.538 Doorbell Buffer Config: Supported 00:12:15.538 Get LBA Status Capability: Not Supported 00:12:15.538 Command & Feature Lockdown Capability: Not Supported 00:12:15.538 Abort Command Limit: 4 00:12:15.538 Async Event Request Limit: 4 00:12:15.538 Number of Firmware Slots: N/A 00:12:15.538 Firmware Slot 1 Read-Only: N/A 00:12:15.538 Firmware Activation Without Reset: N/A 00:12:15.538 Multiple Update Detection Support: N/A 00:12:15.538 Firmware Update Granularity: No Information Provided 00:12:15.538 Per-Namespace SMART Log: Yes 00:12:15.538 Asymmetric Namespace Access Log Page: Not Supported 00:12:15.538 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:12:15.538 Command Effects Log Page: Supported 00:12:15.538 Get Log Page Extended Data: Supported 00:12:15.538 Telemetry Log Pages: Not Supported 00:12:15.538 Persistent Event Log Pages: Not Supported 00:12:15.538 Supported Log Pages Log Page: May Support 00:12:15.538 Commands Supported & Effects Log Page: Not Supported 00:12:15.538 Feature Identifiers & Effects Log Page:May Support 00:12:15.538 NVMe-MI Commands & Effects Log Page: May Support 00:12:15.538 Data Area 4 for Telemetry Log: Not Supported 00:12:15.538 Error Log Page Entries Supported: 1 00:12:15.538 Keep Alive: Not Supported 00:12:15.538 00:12:15.538 NVM Command Set Attributes 00:12:15.538 ========================== 00:12:15.538 Submission Queue Entry Size 00:12:15.538 Max: 64 00:12:15.538 Min: 64 00:12:15.538 Completion Queue Entry Size 00:12:15.538 Max: 16 00:12:15.538 Min: 16 00:12:15.538 Number of Namespaces: 256 00:12:15.538 Compare Command: Supported 00:12:15.538 Write 
Uncorrectable Command: Not Supported 00:12:15.538 Dataset Management Command: Supported 00:12:15.538 Write Zeroes Command: Supported 00:12:15.538 Set Features Save Field: Supported 00:12:15.538 Reservations: Not Supported 00:12:15.538 Timestamp: Supported 00:12:15.538 Copy: Supported 00:12:15.538 Volatile Write Cache: Present 00:12:15.538 Atomic Write Unit (Normal): 1 00:12:15.538 Atomic Write Unit (PFail): 1 00:12:15.538 Atomic Compare & Write Unit: 1 00:12:15.538 Fused Compare & Write: Not Supported 00:12:15.538 Scatter-Gather List 00:12:15.538 SGL Command Set: Supported 00:12:15.538 SGL Keyed: Not Supported 00:12:15.538 SGL Bit Bucket Descriptor: Not Supported 00:12:15.538 SGL Metadata Pointer: Not Supported 00:12:15.538 Oversized SGL: Not Supported 00:12:15.538 SGL Metadata Address: Not Supported 00:12:15.538 SGL Offset: Not Supported 00:12:15.538 Transport SGL Data Block: Not Supported 00:12:15.538 Replay Protected Memory Block: Not Supported 00:12:15.538 00:12:15.538 Firmware Slot Information 00:12:15.538 ========================= 00:12:15.538 Active slot: 1 00:12:15.538 Slot 1 Firmware Revision: 1.0 00:12:15.538 00:12:15.538 00:12:15.538 Commands Supported and Effects 00:12:15.538 ============================== 00:12:15.538 Admin Commands 00:12:15.538 -------------- 00:12:15.539 Delete I/O Submission Queue (00h): Supported 00:12:15.539 Create I/O Submission Queue (01h): Supported 00:12:15.539 Get Log Page (02h): Supported 00:12:15.539 Delete I/O Completion Queue (04h): Supported 00:12:15.539 Create I/O Completion Queue (05h): Supported 00:12:15.539 Identify (06h): Supported 00:12:15.539 Abort (08h): Supported 00:12:15.539 Set Features (09h): Supported 00:12:15.539 Get Features (0Ah): Supported 00:12:15.539 Asynchronous Event Request (0Ch): Supported 00:12:15.539 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:15.539 Directive Send (19h): Supported 00:12:15.539 Directive Receive (1Ah): Supported 00:12:15.539 Virtualization Management (1Ch): Supported 00:12:15.539 Doorbell Buffer Config (7Ch): Supported 00:12:15.539 Format NVM (80h): Supported LBA-Change 00:12:15.539 I/O Commands 00:12:15.539 ------------ 00:12:15.539 Flush (00h): Supported LBA-Change 00:12:15.539 Write (01h): Supported LBA-Change 00:12:15.539 Read (02h): Supported 00:12:15.539 Compare (05h): Supported 00:12:15.539 Write Zeroes (08h): Supported LBA-Change 00:12:15.539 Dataset Management (09h): Supported LBA-Change 00:12:15.539 Unknown (0Ch): Supported 00:12:15.539 Unknown (12h): Supported 00:12:15.539 Copy (19h): Supported LBA-Change 00:12:15.539 Unknown (1Dh): Supported LBA-Change 00:12:15.539 00:12:15.539 Error Log 00:12:15.539 ========= 00:12:15.539 00:12:15.539 Arbitration 00:12:15.539 =========== 00:12:15.539 Arbitration Burst: no limit 00:12:15.539 00:12:15.539 Power Management 00:12:15.539 ================ 00:12:15.539 Number of Power States: 1 00:12:15.539 Current Power State: Power State #0 00:12:15.539 Power State #0: 00:12:15.539 Max Power: 25.00 W 00:12:15.539 Non-Operational State: Operational 00:12:15.539 Entry Latency: 16 microseconds 00:12:15.539 Exit Latency: 4 microseconds 00:12:15.539 Relative Read Throughput: 0 00:12:15.539 Relative Read Latency: 0 00:12:15.539 Relative Write Throughput: 0 00:12:15.539 Relative Write Latency: 0 00:12:15.539 Idle Power: Not Reported 00:12:15.539 Active Power: Not Reported 00:12:15.539 Non-Operational Permissive Mode: Not Supported 00:12:15.539 00:12:15.539 Health Information 00:12:15.539 ================== 00:12:15.539 Critical Warnings: 00:12:15.539 
Available Spare Space: OK 00:12:15.539 Temperature: OK 00:12:15.539 Device Reliability: OK 00:12:15.539 Read Only: No 00:12:15.539 Volatile Memory Backup: OK 00:12:15.539 Current Temperature: 323 Kelvin (50 Celsius) 00:12:15.539 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:15.539 Available Spare: 0% 00:12:15.539 Available Spare Threshold: 0% 00:12:15.539 Life Percentage Used: 0% 00:12:15.539 Data Units Read: 783 00:12:15.539 Data Units Written: 712 00:12:15.539 Host Read Commands: 33011 00:12:15.539 Host Write Commands: 32434 00:12:15.539 Controller Busy Time: 0 minutes 00:12:15.539 Power Cycles: 0 00:12:15.539 Power On Hours: 0 hours 00:12:15.539 Unsafe Shutdowns: 0 00:12:15.539 Unrecoverable Media Errors: 0 00:12:15.539 Lifetime Error Log Entries: 0 00:12:15.539 Warning Temperature Time: 0 minutes 00:12:15.539 Critical Temperature Time: 0 minutes 00:12:15.539 00:12:15.539 Number of Queues 00:12:15.539 ================ 00:12:15.539 Number of I/O Submission Queues: 64 00:12:15.539 Number of I/O Completion Queues: 64 00:12:15.539 00:12:15.539 ZNS Specific Controller Data 00:12:15.539 ============================ 00:12:15.539 Zone Append Size Limit: 0 00:12:15.539 00:12:15.539 00:12:15.539 Active Namespaces 00:12:15.539 ================= 00:12:15.539 Namespace ID:1 00:12:15.539 Error Recovery Timeout: Unlimited 00:12:15.539 Command Set Identifier: NVM (00h) 00:12:15.539 Deallocate: Supported 00:12:15.539 Deallocated/Unwritten Error: Supported 00:12:15.539 Deallocated Read Value: All 0x00 00:12:15.539 Deallocate in Write Zeroes: Not Supported 00:12:15.539 Deallocated Guard Field: 0xFFFF 00:12:15.539 Flush: Supported 00:12:15.539 Reservation: Not Supported 00:12:15.539 Namespace Sharing Capabilities: Multiple Controllers 00:12:15.539 Size (in LBAs): 262144 (1GiB) 00:12:15.539 Capacity (in LBAs): 262144 (1GiB) 00:12:15.539 Utilization (in LBAs): 262144 (1GiB) 00:12:15.539 Thin Provisioning: Not Supported 00:12:15.539 Per-NS Atomic Units: No 00:12:15.539 Maximum Single Source Range Length: 128 00:12:15.539 Maximum Copy Length: 128 00:12:15.539 Maximum Source Range Count: 128 00:12:15.539 NGUID/EUI64 Never Reused: No 00:12:15.539 Namespace Write Protected: No 00:12:15.539 Endurance group ID: 1 00:12:15.539 Number of LBA Formats: 8 00:12:15.539 Current LBA Format: LBA Format #04 00:12:15.539 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:15.539 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:15.539 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:15.539 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:15.539 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:15.539 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:15.539 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:15.539 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:15.539 00:12:15.539 Get Feature FDP: 00:12:15.539 ================ 00:12:15.539 Enabled: Yes 00:12:15.539 FDP configuration index: 0 00:12:15.539 00:12:15.539 FDP configurations log page 00:12:15.539 =========================== 00:12:15.539 Number of FDP configurations: 1 00:12:15.539 Version: 0 00:12:15.539 Size: 112 00:12:15.539 FDP Configuration Descriptor: 0 00:12:15.539 Descriptor Size: 96 00:12:15.539 Reclaim Group Identifier format: 2 00:12:15.539 FDP Volatile Write Cache: Not Present 00:12:15.539 FDP Configuration: Valid 00:12:15.539 Vendor Specific Size: 0 00:12:15.539 Number of Reclaim Groups: 2 00:12:15.539 Number of Reclaim Unit Handles: 8 00:12:15.539 Max Placement Identifiers: 128 00:12:15.539 Number of 
Namespaces Supported: 256 00:12:15.539 Reclaim unit Nominal Size: 6000000 bytes 00:12:15.539 Estimated Reclaim Unit Time Limit: Not Reported 00:12:15.539 RUH Desc #000: RUH Type: Initially Isolated 00:12:15.539 RUH Desc #001: RUH Type: Initially Isolated 00:12:15.539 RUH Desc #002: RUH Type: Initially Isolated 00:12:15.539 RUH Desc #003: RUH Type: Initially Isolated 00:12:15.539 RUH Desc #004: RUH Type: Initially Isolated 00:12:15.539 RUH Desc #005: RUH Type: Initially Isolated 00:12:15.539 RUH Desc #006: RUH Type: Initially Isolated 00:12:15.539 RUH Desc #007: RUH Type: Initially Isolated 00:12:15.539 00:12:15.539 FDP reclaim unit handle usage log page 00:12:15.539 ====================================== 00:12:15.539 Number of Reclaim Unit Handles: 8 00:12:15.539 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:12:15.539 RUH Usage Desc #001: RUH Attributes: Unused 00:12:15.539 RUH Usage Desc #002: RUH Attributes: Unused 00:12:15.539 RUH Usage Desc #003: RUH Attributes: Unused 00:12:15.539 RUH Usage Desc #004: RUH Attributes: Unused 00:12:15.539 RUH Usage Desc #005: RUH Attributes: Unused 00:12:15.539 RUH Usage Desc #006: RUH Attributes: Unused 00:12:15.539 RUH Usage Desc #007: RUH Attributes: Unused 00:12:15.539 00:12:15.539 FDP statistics log page 00:12:15.539 ======================= 00:12:15.539 Host bytes with metadata written: 441688064 00:12:15.539 [2024-11-06 07:48:37.971826] nvme_ctrlr.c:3605:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64442 terminated unexpected 00:12:15.539 Media bytes with metadata written: 441753600 00:12:15.539 Media bytes erased: 0 00:12:15.539 00:12:15.539 FDP events log page 00:12:15.539 =================== 00:12:15.539 Number of FDP events: 0 00:12:15.539 00:12:15.539 NVM Specific Namespace Data 00:12:15.539 =========================== 00:12:15.539 Logical Block Storage Tag Mask: 0 00:12:15.539 Protection Information Capabilities: 00:12:15.539 16b Guard Protection Information Storage Tag Support: No 00:12:15.539 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:15.539 Storage Tag Check Read Support: No 00:12:15.539 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.539 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.539 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.539 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.539 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.539 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.539 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.539 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.539 ===================================================== 00:12:15.539 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:15.539 ===================================================== 00:12:15.539 Controller Capabilities/Features 00:12:15.539 ================================ 00:12:15.539 Vendor ID: 1b36 00:12:15.540 Subsystem Vendor ID: 1af4 00:12:15.540 Serial Number: 12342 00:12:15.540 Model Number: QEMU NVMe Ctrl 00:12:15.540 Firmware Version: 8.0.0 00:12:15.540 Recommended Arb Burst: 6 00:12:15.540 IEEE OUI Identifier: 00 54 52 00:12:15.540 Multi-path I/O 
00:12:15.540 May have multiple subsystem ports: No 00:12:15.540 May have multiple controllers: No 00:12:15.540 Associated with SR-IOV VF: No 00:12:15.540 Max Data Transfer Size: 524288 00:12:15.540 Max Number of Namespaces: 256 00:12:15.540 Max Number of I/O Queues: 64 00:12:15.540 NVMe Specification Version (VS): 1.4 00:12:15.540 NVMe Specification Version (Identify): 1.4 00:12:15.540 Maximum Queue Entries: 2048 00:12:15.540 Contiguous Queues Required: Yes 00:12:15.540 Arbitration Mechanisms Supported 00:12:15.540 Weighted Round Robin: Not Supported 00:12:15.540 Vendor Specific: Not Supported 00:12:15.540 Reset Timeout: 7500 ms 00:12:15.540 Doorbell Stride: 4 bytes 00:12:15.540 NVM Subsystem Reset: Not Supported 00:12:15.540 Command Sets Supported 00:12:15.540 NVM Command Set: Supported 00:12:15.540 Boot Partition: Not Supported 00:12:15.540 Memory Page Size Minimum: 4096 bytes 00:12:15.540 Memory Page Size Maximum: 65536 bytes 00:12:15.540 Persistent Memory Region: Not Supported 00:12:15.540 Optional Asynchronous Events Supported 00:12:15.540 Namespace Attribute Notices: Supported 00:12:15.540 Firmware Activation Notices: Not Supported 00:12:15.540 ANA Change Notices: Not Supported 00:12:15.540 PLE Aggregate Log Change Notices: Not Supported 00:12:15.540 LBA Status Info Alert Notices: Not Supported 00:12:15.540 EGE Aggregate Log Change Notices: Not Supported 00:12:15.540 Normal NVM Subsystem Shutdown event: Not Supported 00:12:15.540 Zone Descriptor Change Notices: Not Supported 00:12:15.540 Discovery Log Change Notices: Not Supported 00:12:15.540 Controller Attributes 00:12:15.540 128-bit Host Identifier: Not Supported 00:12:15.540 Non-Operational Permissive Mode: Not Supported 00:12:15.540 NVM Sets: Not Supported 00:12:15.540 Read Recovery Levels: Not Supported 00:12:15.540 Endurance Groups: Not Supported 00:12:15.540 Predictable Latency Mode: Not Supported 00:12:15.540 Traffic Based Keep ALive: Not Supported 00:12:15.540 Namespace Granularity: Not Supported 00:12:15.540 SQ Associations: Not Supported 00:12:15.540 UUID List: Not Supported 00:12:15.540 Multi-Domain Subsystem: Not Supported 00:12:15.540 Fixed Capacity Management: Not Supported 00:12:15.540 Variable Capacity Management: Not Supported 00:12:15.540 Delete Endurance Group: Not Supported 00:12:15.540 Delete NVM Set: Not Supported 00:12:15.540 Extended LBA Formats Supported: Supported 00:12:15.540 Flexible Data Placement Supported: Not Supported 00:12:15.540 00:12:15.540 Controller Memory Buffer Support 00:12:15.540 ================================ 00:12:15.540 Supported: No 00:12:15.540 00:12:15.540 Persistent Memory Region Support 00:12:15.540 ================================ 00:12:15.540 Supported: No 00:12:15.540 00:12:15.540 Admin Command Set Attributes 00:12:15.540 ============================ 00:12:15.540 Security Send/Receive: Not Supported 00:12:15.540 Format NVM: Supported 00:12:15.540 Firmware Activate/Download: Not Supported 00:12:15.540 Namespace Management: Supported 00:12:15.540 Device Self-Test: Not Supported 00:12:15.540 Directives: Supported 00:12:15.540 NVMe-MI: Not Supported 00:12:15.540 Virtualization Management: Not Supported 00:12:15.540 Doorbell Buffer Config: Supported 00:12:15.540 Get LBA Status Capability: Not Supported 00:12:15.540 Command & Feature Lockdown Capability: Not Supported 00:12:15.540 Abort Command Limit: 4 00:12:15.540 Async Event Request Limit: 4 00:12:15.540 Number of Firmware Slots: N/A 00:12:15.540 Firmware Slot 1 Read-Only: N/A 00:12:15.540 Firmware Activation Without Reset: N/A 
00:12:15.540 Multiple Update Detection Support: N/A 00:12:15.540 Firmware Update Granularity: No Information Provided 00:12:15.540 Per-Namespace SMART Log: Yes 00:12:15.540 Asymmetric Namespace Access Log Page: Not Supported 00:12:15.540 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:12:15.540 Command Effects Log Page: Supported 00:12:15.540 Get Log Page Extended Data: Supported 00:12:15.540 Telemetry Log Pages: Not Supported 00:12:15.540 Persistent Event Log Pages: Not Supported 00:12:15.540 Supported Log Pages Log Page: May Support 00:12:15.540 Commands Supported & Effects Log Page: Not Supported 00:12:15.540 Feature Identifiers & Effects Log Page:May Support 00:12:15.540 NVMe-MI Commands & Effects Log Page: May Support 00:12:15.540 Data Area 4 for Telemetry Log: Not Supported 00:12:15.540 Error Log Page Entries Supported: 1 00:12:15.540 Keep Alive: Not Supported 00:12:15.540 00:12:15.540 NVM Command Set Attributes 00:12:15.540 ========================== 00:12:15.540 Submission Queue Entry Size 00:12:15.540 Max: 64 00:12:15.540 Min: 64 00:12:15.540 Completion Queue Entry Size 00:12:15.540 Max: 16 00:12:15.540 Min: 16 00:12:15.540 Number of Namespaces: 256 00:12:15.540 Compare Command: Supported 00:12:15.540 Write Uncorrectable Command: Not Supported 00:12:15.540 Dataset Management Command: Supported 00:12:15.540 Write Zeroes Command: Supported 00:12:15.540 Set Features Save Field: Supported 00:12:15.540 Reservations: Not Supported 00:12:15.540 Timestamp: Supported 00:12:15.540 Copy: Supported 00:12:15.540 Volatile Write Cache: Present 00:12:15.540 Atomic Write Unit (Normal): 1 00:12:15.540 Atomic Write Unit (PFail): 1 00:12:15.540 Atomic Compare & Write Unit: 1 00:12:15.540 Fused Compare & Write: Not Supported 00:12:15.540 Scatter-Gather List 00:12:15.540 SGL Command Set: Supported 00:12:15.540 SGL Keyed: Not Supported 00:12:15.540 SGL Bit Bucket Descriptor: Not Supported 00:12:15.540 SGL Metadata Pointer: Not Supported 00:12:15.540 Oversized SGL: Not Supported 00:12:15.540 SGL Metadata Address: Not Supported 00:12:15.540 SGL Offset: Not Supported 00:12:15.540 Transport SGL Data Block: Not Supported 00:12:15.540 Replay Protected Memory Block: Not Supported 00:12:15.540 00:12:15.540 Firmware Slot Information 00:12:15.540 ========================= 00:12:15.540 Active slot: 1 00:12:15.540 Slot 1 Firmware Revision: 1.0 00:12:15.540 00:12:15.540 00:12:15.540 Commands Supported and Effects 00:12:15.540 ============================== 00:12:15.540 Admin Commands 00:12:15.540 -------------- 00:12:15.540 Delete I/O Submission Queue (00h): Supported 00:12:15.540 Create I/O Submission Queue (01h): Supported 00:12:15.540 Get Log Page (02h): Supported 00:12:15.540 Delete I/O Completion Queue (04h): Supported 00:12:15.540 Create I/O Completion Queue (05h): Supported 00:12:15.541 Identify (06h): Supported 00:12:15.541 Abort (08h): Supported 00:12:15.541 Set Features (09h): Supported 00:12:15.541 Get Features (0Ah): Supported 00:12:15.541 Asynchronous Event Request (0Ch): Supported 00:12:15.541 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:15.541 Directive Send (19h): Supported 00:12:15.541 Directive Receive (1Ah): Supported 00:12:15.541 Virtualization Management (1Ch): Supported 00:12:15.541 Doorbell Buffer Config (7Ch): Supported 00:12:15.541 Format NVM (80h): Supported LBA-Change 00:12:15.541 I/O Commands 00:12:15.541 ------------ 00:12:15.541 Flush (00h): Supported LBA-Change 00:12:15.541 Write (01h): Supported LBA-Change 00:12:15.541 Read (02h): Supported 00:12:15.541 Compare (05h): 
Supported 00:12:15.541 Write Zeroes (08h): Supported LBA-Change 00:12:15.541 Dataset Management (09h): Supported LBA-Change 00:12:15.541 Unknown (0Ch): Supported 00:12:15.541 Unknown (12h): Supported 00:12:15.541 Copy (19h): Supported LBA-Change 00:12:15.541 Unknown (1Dh): Supported LBA-Change 00:12:15.541 00:12:15.541 Error Log 00:12:15.541 ========= 00:12:15.541 00:12:15.541 Arbitration 00:12:15.541 =========== 00:12:15.541 Arbitration Burst: no limit 00:12:15.541 00:12:15.541 Power Management 00:12:15.541 ================ 00:12:15.541 Number of Power States: 1 00:12:15.541 Current Power State: Power State #0 00:12:15.541 Power State #0: 00:12:15.541 Max Power: 25.00 W 00:12:15.541 Non-Operational State: Operational 00:12:15.541 Entry Latency: 16 microseconds 00:12:15.541 Exit Latency: 4 microseconds 00:12:15.541 Relative Read Throughput: 0 00:12:15.541 Relative Read Latency: 0 00:12:15.541 Relative Write Throughput: 0 00:12:15.541 Relative Write Latency: 0 00:12:15.541 Idle Power: Not Reported 00:12:15.541 Active Power: Not Reported 00:12:15.541 Non-Operational Permissive Mode: Not Supported 00:12:15.541 00:12:15.541 Health Information 00:12:15.541 ================== 00:12:15.541 Critical Warnings: 00:12:15.541 Available Spare Space: OK 00:12:15.541 Temperature: OK 00:12:15.541 Device Reliability: OK 00:12:15.541 Read Only: No 00:12:15.541 Volatile Memory Backup: OK 00:12:15.541 Current Temperature: 323 Kelvin (50 Celsius) 00:12:15.541 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:15.541 Available Spare: 0% 00:12:15.541 Available Spare Threshold: 0% 00:12:15.541 Life Percentage Used: 0% 00:12:15.541 Data Units Read: 2136 00:12:15.541 Data Units Written: 1924 00:12:15.541 Host Read Commands: 96851 00:12:15.541 Host Write Commands: 95120 00:12:15.541 Controller Busy Time: 0 minutes 00:12:15.541 Power Cycles: 0 00:12:15.541 Power On Hours: 0 hours 00:12:15.541 Unsafe Shutdowns: 0 00:12:15.541 Unrecoverable Media Errors: 0 00:12:15.541 Lifetime Error Log Entries: 0 00:12:15.541 Warning Temperature Time: 0 minutes 00:12:15.541 Critical Temperature Time: 0 minutes 00:12:15.541 00:12:15.541 Number of Queues 00:12:15.541 ================ 00:12:15.541 Number of I/O Submission Queues: 64 00:12:15.541 Number of I/O Completion Queues: 64 00:12:15.541 00:12:15.541 ZNS Specific Controller Data 00:12:15.541 ============================ 00:12:15.541 Zone Append Size Limit: 0 00:12:15.541 00:12:15.541 00:12:15.541 Active Namespaces 00:12:15.541 ================= 00:12:15.541 Namespace ID:1 00:12:15.541 Error Recovery Timeout: Unlimited 00:12:15.541 Command Set Identifier: NVM (00h) 00:12:15.541 Deallocate: Supported 00:12:15.541 Deallocated/Unwritten Error: Supported 00:12:15.541 Deallocated Read Value: All 0x00 00:12:15.541 Deallocate in Write Zeroes: Not Supported 00:12:15.541 Deallocated Guard Field: 0xFFFF 00:12:15.541 Flush: Supported 00:12:15.541 Reservation: Not Supported 00:12:15.541 Namespace Sharing Capabilities: Private 00:12:15.541 Size (in LBAs): 1048576 (4GiB) 00:12:15.541 Capacity (in LBAs): 1048576 (4GiB) 00:12:15.541 Utilization (in LBAs): 1048576 (4GiB) 00:12:15.541 Thin Provisioning: Not Supported 00:12:15.541 Per-NS Atomic Units: No 00:12:15.541 Maximum Single Source Range Length: 128 00:12:15.541 Maximum Copy Length: 128 00:12:15.541 Maximum Source Range Count: 128 00:12:15.541 NGUID/EUI64 Never Reused: No 00:12:15.541 Namespace Write Protected: No 00:12:15.541 Number of LBA Formats: 8 00:12:15.541 Current LBA Format: LBA Format #04 00:12:15.541 LBA Format #00: Data Size: 512 
Metadata Size: 0 00:12:15.541 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:15.541 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:15.541 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:15.541 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:15.541 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:15.541 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:15.541 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:15.541 00:12:15.541 NVM Specific Namespace Data 00:12:15.541 =========================== 00:12:15.541 Logical Block Storage Tag Mask: 0 00:12:15.541 Protection Information Capabilities: 00:12:15.541 16b Guard Protection Information Storage Tag Support: No 00:12:15.541 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:15.541 Storage Tag Check Read Support: No 00:12:15.541 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.541 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.541 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.541 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.541 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.541 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.541 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.541 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.541 Namespace ID:2 00:12:15.541 Error Recovery Timeout: Unlimited 00:12:15.541 Command Set Identifier: NVM (00h) 00:12:15.541 Deallocate: Supported 00:12:15.541 Deallocated/Unwritten Error: Supported 00:12:15.541 Deallocated Read Value: All 0x00 00:12:15.541 Deallocate in Write Zeroes: Not Supported 00:12:15.541 Deallocated Guard Field: 0xFFFF 00:12:15.541 Flush: Supported 00:12:15.541 Reservation: Not Supported 00:12:15.541 Namespace Sharing Capabilities: Private 00:12:15.541 Size (in LBAs): 1048576 (4GiB) 00:12:15.541 Capacity (in LBAs): 1048576 (4GiB) 00:12:15.541 Utilization (in LBAs): 1048576 (4GiB) 00:12:15.541 Thin Provisioning: Not Supported 00:12:15.541 Per-NS Atomic Units: No 00:12:15.541 Maximum Single Source Range Length: 128 00:12:15.541 Maximum Copy Length: 128 00:12:15.541 Maximum Source Range Count: 128 00:12:15.541 NGUID/EUI64 Never Reused: No 00:12:15.541 Namespace Write Protected: No 00:12:15.541 Number of LBA Formats: 8 00:12:15.541 Current LBA Format: LBA Format #04 00:12:15.541 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:15.541 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:15.541 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:15.541 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:15.541 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:15.541 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:15.541 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:15.541 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:15.541 00:12:15.541 NVM Specific Namespace Data 00:12:15.541 =========================== 00:12:15.541 Logical Block Storage Tag Mask: 0 00:12:15.541 Protection Information Capabilities: 00:12:15.541 16b Guard Protection Information Storage Tag Support: No 00:12:15.541 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
00:12:15.541 Storage Tag Check Read Support: No 00:12:15.541 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.541 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.541 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.541 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.541 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.541 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.541 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.541 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.541 Namespace ID:3 00:12:15.541 Error Recovery Timeout: Unlimited 00:12:15.541 Command Set Identifier: NVM (00h) 00:12:15.541 Deallocate: Supported 00:12:15.542 Deallocated/Unwritten Error: Supported 00:12:15.542 Deallocated Read Value: All 0x00 00:12:15.542 Deallocate in Write Zeroes: Not Supported 00:12:15.542 Deallocated Guard Field: 0xFFFF 00:12:15.542 Flush: Supported 00:12:15.542 Reservation: Not Supported 00:12:15.542 Namespace Sharing Capabilities: Private 00:12:15.542 Size (in LBAs): 1048576 (4GiB) 00:12:15.542 Capacity (in LBAs): 1048576 (4GiB) 00:12:15.542 Utilization (in LBAs): 1048576 (4GiB) 00:12:15.542 Thin Provisioning: Not Supported 00:12:15.542 Per-NS Atomic Units: No 00:12:15.542 Maximum Single Source Range Length: 128 00:12:15.542 Maximum Copy Length: 128 00:12:15.542 Maximum Source Range Count: 128 00:12:15.542 NGUID/EUI64 Never Reused: No 00:12:15.542 Namespace Write Protected: No 00:12:15.542 Number of LBA Formats: 8 00:12:15.542 Current LBA Format: LBA Format #04 00:12:15.542 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:15.542 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:15.542 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:15.542 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:15.542 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:15.542 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:15.542 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:15.542 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:15.542 00:12:15.542 NVM Specific Namespace Data 00:12:15.542 =========================== 00:12:15.542 Logical Block Storage Tag Mask: 0 00:12:15.542 Protection Information Capabilities: 00:12:15.542 16b Guard Protection Information Storage Tag Support: No 00:12:15.542 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:15.542 Storage Tag Check Read Support: No 00:12:15.542 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.542 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.542 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.542 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.542 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.542 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.542 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.542 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.542 07:48:38 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:12:15.542 07:48:38 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:12:15.802 ===================================================== 00:12:15.802 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:15.802 ===================================================== 00:12:15.802 Controller Capabilities/Features 00:12:15.802 ================================ 00:12:15.802 Vendor ID: 1b36 00:12:15.802 Subsystem Vendor ID: 1af4 00:12:15.802 Serial Number: 12340 00:12:15.802 Model Number: QEMU NVMe Ctrl 00:12:15.802 Firmware Version: 8.0.0 00:12:15.802 Recommended Arb Burst: 6 00:12:15.802 IEEE OUI Identifier: 00 54 52 00:12:15.802 Multi-path I/O 00:12:15.802 May have multiple subsystem ports: No 00:12:15.802 May have multiple controllers: No 00:12:15.802 Associated with SR-IOV VF: No 00:12:15.802 Max Data Transfer Size: 524288 00:12:15.802 Max Number of Namespaces: 256 00:12:15.802 Max Number of I/O Queues: 64 00:12:15.802 NVMe Specification Version (VS): 1.4 00:12:15.802 NVMe Specification Version (Identify): 1.4 00:12:15.802 Maximum Queue Entries: 2048 00:12:15.802 Contiguous Queues Required: Yes 00:12:15.802 Arbitration Mechanisms Supported 00:12:15.802 Weighted Round Robin: Not Supported 00:12:15.802 Vendor Specific: Not Supported 00:12:15.802 Reset Timeout: 7500 ms 00:12:15.802 Doorbell Stride: 4 bytes 00:12:15.802 NVM Subsystem Reset: Not Supported 00:12:15.802 Command Sets Supported 00:12:15.802 NVM Command Set: Supported 00:12:15.802 Boot Partition: Not Supported 00:12:15.802 Memory Page Size Minimum: 4096 bytes 00:12:15.802 Memory Page Size Maximum: 65536 bytes 00:12:15.802 Persistent Memory Region: Not Supported 00:12:15.802 Optional Asynchronous Events Supported 00:12:15.802 Namespace Attribute Notices: Supported 00:12:15.802 Firmware Activation Notices: Not Supported 00:12:15.802 ANA Change Notices: Not Supported 00:12:15.802 PLE Aggregate Log Change Notices: Not Supported 00:12:15.802 LBA Status Info Alert Notices: Not Supported 00:12:15.802 EGE Aggregate Log Change Notices: Not Supported 00:12:15.802 Normal NVM Subsystem Shutdown event: Not Supported 00:12:15.802 Zone Descriptor Change Notices: Not Supported 00:12:15.802 Discovery Log Change Notices: Not Supported 00:12:15.802 Controller Attributes 00:12:15.802 128-bit Host Identifier: Not Supported 00:12:15.802 Non-Operational Permissive Mode: Not Supported 00:12:15.802 NVM Sets: Not Supported 00:12:15.802 Read Recovery Levels: Not Supported 00:12:15.802 Endurance Groups: Not Supported 00:12:15.802 Predictable Latency Mode: Not Supported 00:12:15.802 Traffic Based Keep ALive: Not Supported 00:12:15.802 Namespace Granularity: Not Supported 00:12:15.802 SQ Associations: Not Supported 00:12:15.802 UUID List: Not Supported 00:12:15.802 Multi-Domain Subsystem: Not Supported 00:12:15.802 Fixed Capacity Management: Not Supported 00:12:15.802 Variable Capacity Management: Not Supported 00:12:15.802 Delete Endurance Group: Not Supported 00:12:15.802 Delete NVM Set: Not Supported 00:12:15.802 Extended LBA Formats Supported: Supported 00:12:15.802 Flexible Data Placement Supported: Not Supported 00:12:15.802 00:12:15.802 Controller Memory Buffer Support 00:12:15.802 ================================ 00:12:15.802 Supported: No 00:12:15.802 00:12:15.802 Persistent Memory Region Support 00:12:15.802 
================================ 00:12:15.802 Supported: No 00:12:15.802 00:12:15.802 Admin Command Set Attributes 00:12:15.802 ============================ 00:12:15.802 Security Send/Receive: Not Supported 00:12:15.802 Format NVM: Supported 00:12:15.802 Firmware Activate/Download: Not Supported 00:12:15.802 Namespace Management: Supported 00:12:15.802 Device Self-Test: Not Supported 00:12:15.802 Directives: Supported 00:12:15.802 NVMe-MI: Not Supported 00:12:15.802 Virtualization Management: Not Supported 00:12:15.802 Doorbell Buffer Config: Supported 00:12:15.802 Get LBA Status Capability: Not Supported 00:12:15.802 Command & Feature Lockdown Capability: Not Supported 00:12:15.802 Abort Command Limit: 4 00:12:15.802 Async Event Request Limit: 4 00:12:15.802 Number of Firmware Slots: N/A 00:12:15.802 Firmware Slot 1 Read-Only: N/A 00:12:15.802 Firmware Activation Without Reset: N/A 00:12:15.802 Multiple Update Detection Support: N/A 00:12:15.802 Firmware Update Granularity: No Information Provided 00:12:15.802 Per-Namespace SMART Log: Yes 00:12:15.802 Asymmetric Namespace Access Log Page: Not Supported 00:12:15.802 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:12:15.802 Command Effects Log Page: Supported 00:12:15.802 Get Log Page Extended Data: Supported 00:12:15.802 Telemetry Log Pages: Not Supported 00:12:15.802 Persistent Event Log Pages: Not Supported 00:12:15.802 Supported Log Pages Log Page: May Support 00:12:15.803 Commands Supported & Effects Log Page: Not Supported 00:12:15.803 Feature Identifiers & Effects Log Page:May Support 00:12:15.803 NVMe-MI Commands & Effects Log Page: May Support 00:12:15.803 Data Area 4 for Telemetry Log: Not Supported 00:12:15.803 Error Log Page Entries Supported: 1 00:12:15.803 Keep Alive: Not Supported 00:12:15.803 00:12:15.803 NVM Command Set Attributes 00:12:15.803 ========================== 00:12:15.803 Submission Queue Entry Size 00:12:15.803 Max: 64 00:12:15.803 Min: 64 00:12:15.803 Completion Queue Entry Size 00:12:15.803 Max: 16 00:12:15.803 Min: 16 00:12:15.803 Number of Namespaces: 256 00:12:15.803 Compare Command: Supported 00:12:15.803 Write Uncorrectable Command: Not Supported 00:12:15.803 Dataset Management Command: Supported 00:12:15.803 Write Zeroes Command: Supported 00:12:15.803 Set Features Save Field: Supported 00:12:15.803 Reservations: Not Supported 00:12:15.803 Timestamp: Supported 00:12:15.803 Copy: Supported 00:12:15.803 Volatile Write Cache: Present 00:12:15.803 Atomic Write Unit (Normal): 1 00:12:15.803 Atomic Write Unit (PFail): 1 00:12:15.803 Atomic Compare & Write Unit: 1 00:12:15.803 Fused Compare & Write: Not Supported 00:12:15.803 Scatter-Gather List 00:12:15.803 SGL Command Set: Supported 00:12:15.803 SGL Keyed: Not Supported 00:12:15.803 SGL Bit Bucket Descriptor: Not Supported 00:12:15.803 SGL Metadata Pointer: Not Supported 00:12:15.803 Oversized SGL: Not Supported 00:12:15.803 SGL Metadata Address: Not Supported 00:12:15.803 SGL Offset: Not Supported 00:12:15.803 Transport SGL Data Block: Not Supported 00:12:15.803 Replay Protected Memory Block: Not Supported 00:12:15.803 00:12:15.803 Firmware Slot Information 00:12:15.803 ========================= 00:12:15.803 Active slot: 1 00:12:15.803 Slot 1 Firmware Revision: 1.0 00:12:15.803 00:12:15.803 00:12:15.803 Commands Supported and Effects 00:12:15.803 ============================== 00:12:15.803 Admin Commands 00:12:15.803 -------------- 00:12:15.803 Delete I/O Submission Queue (00h): Supported 00:12:15.803 Create I/O Submission Queue (01h): Supported 00:12:15.803 
Get Log Page (02h): Supported 00:12:15.803 Delete I/O Completion Queue (04h): Supported 00:12:15.803 Create I/O Completion Queue (05h): Supported 00:12:15.803 Identify (06h): Supported 00:12:15.803 Abort (08h): Supported 00:12:15.803 Set Features (09h): Supported 00:12:15.803 Get Features (0Ah): Supported 00:12:15.803 Asynchronous Event Request (0Ch): Supported 00:12:15.803 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:15.803 Directive Send (19h): Supported 00:12:15.803 Directive Receive (1Ah): Supported 00:12:15.803 Virtualization Management (1Ch): Supported 00:12:15.803 Doorbell Buffer Config (7Ch): Supported 00:12:15.803 Format NVM (80h): Supported LBA-Change 00:12:15.803 I/O Commands 00:12:15.803 ------------ 00:12:15.803 Flush (00h): Supported LBA-Change 00:12:15.803 Write (01h): Supported LBA-Change 00:12:15.803 Read (02h): Supported 00:12:15.803 Compare (05h): Supported 00:12:15.803 Write Zeroes (08h): Supported LBA-Change 00:12:15.803 Dataset Management (09h): Supported LBA-Change 00:12:15.803 Unknown (0Ch): Supported 00:12:15.803 Unknown (12h): Supported 00:12:15.803 Copy (19h): Supported LBA-Change 00:12:15.803 Unknown (1Dh): Supported LBA-Change 00:12:15.803 00:12:15.803 Error Log 00:12:15.803 ========= 00:12:15.803 00:12:15.803 Arbitration 00:12:15.803 =========== 00:12:15.803 Arbitration Burst: no limit 00:12:15.803 00:12:15.803 Power Management 00:12:15.803 ================ 00:12:15.803 Number of Power States: 1 00:12:15.803 Current Power State: Power State #0 00:12:15.803 Power State #0: 00:12:15.803 Max Power: 25.00 W 00:12:15.803 Non-Operational State: Operational 00:12:15.803 Entry Latency: 16 microseconds 00:12:15.803 Exit Latency: 4 microseconds 00:12:15.803 Relative Read Throughput: 0 00:12:15.803 Relative Read Latency: 0 00:12:15.803 Relative Write Throughput: 0 00:12:15.803 Relative Write Latency: 0 00:12:15.803 Idle Power: Not Reported 00:12:15.803 Active Power: Not Reported 00:12:15.803 Non-Operational Permissive Mode: Not Supported 00:12:15.803 00:12:15.803 Health Information 00:12:15.803 ================== 00:12:15.803 Critical Warnings: 00:12:15.803 Available Spare Space: OK 00:12:15.803 Temperature: OK 00:12:15.803 Device Reliability: OK 00:12:15.803 Read Only: No 00:12:15.803 Volatile Memory Backup: OK 00:12:15.803 Current Temperature: 323 Kelvin (50 Celsius) 00:12:15.803 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:15.803 Available Spare: 0% 00:12:15.803 Available Spare Threshold: 0% 00:12:15.803 Life Percentage Used: 0% 00:12:15.803 Data Units Read: 676 00:12:15.803 Data Units Written: 604 00:12:15.803 Host Read Commands: 31701 00:12:15.803 Host Write Commands: 31487 00:12:15.803 Controller Busy Time: 0 minutes 00:12:15.803 Power Cycles: 0 00:12:15.803 Power On Hours: 0 hours 00:12:15.803 Unsafe Shutdowns: 0 00:12:15.803 Unrecoverable Media Errors: 0 00:12:15.803 Lifetime Error Log Entries: 0 00:12:15.803 Warning Temperature Time: 0 minutes 00:12:15.803 Critical Temperature Time: 0 minutes 00:12:15.803 00:12:15.803 Number of Queues 00:12:15.803 ================ 00:12:15.803 Number of I/O Submission Queues: 64 00:12:15.803 Number of I/O Completion Queues: 64 00:12:15.803 00:12:15.803 ZNS Specific Controller Data 00:12:15.803 ============================ 00:12:15.803 Zone Append Size Limit: 0 00:12:15.803 00:12:15.803 00:12:15.803 Active Namespaces 00:12:15.803 ================= 00:12:15.803 Namespace ID:1 00:12:15.803 Error Recovery Timeout: Unlimited 00:12:15.803 Command Set Identifier: NVM (00h) 00:12:15.803 Deallocate: Supported 
00:12:15.803 Deallocated/Unwritten Error: Supported 00:12:15.803 Deallocated Read Value: All 0x00 00:12:15.803 Deallocate in Write Zeroes: Not Supported 00:12:15.803 Deallocated Guard Field: 0xFFFF 00:12:15.803 Flush: Supported 00:12:15.803 Reservation: Not Supported 00:12:15.803 Metadata Transferred as: Separate Metadata Buffer 00:12:15.803 Namespace Sharing Capabilities: Private 00:12:15.803 Size (in LBAs): 1548666 (5GiB) 00:12:15.803 Capacity (in LBAs): 1548666 (5GiB) 00:12:15.803 Utilization (in LBAs): 1548666 (5GiB) 00:12:15.803 Thin Provisioning: Not Supported 00:12:15.803 Per-NS Atomic Units: No 00:12:15.803 Maximum Single Source Range Length: 128 00:12:15.803 Maximum Copy Length: 128 00:12:15.803 Maximum Source Range Count: 128 00:12:15.803 NGUID/EUI64 Never Reused: No 00:12:15.803 Namespace Write Protected: No 00:12:15.803 Number of LBA Formats: 8 00:12:15.803 Current LBA Format: LBA Format #07 00:12:15.803 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:15.803 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:15.804 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:15.804 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:15.804 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:15.804 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:15.804 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:15.804 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:15.804 00:12:15.804 NVM Specific Namespace Data 00:12:15.804 =========================== 00:12:15.804 Logical Block Storage Tag Mask: 0 00:12:15.804 Protection Information Capabilities: 00:12:15.804 16b Guard Protection Information Storage Tag Support: No 00:12:15.804 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:15.804 Storage Tag Check Read Support: No 00:12:15.804 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.804 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.804 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.804 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.804 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.804 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.804 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.804 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:15.804 07:48:38 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:12:15.804 07:48:38 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:12:16.063 ===================================================== 00:12:16.063 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:16.063 ===================================================== 00:12:16.063 Controller Capabilities/Features 00:12:16.063 ================================ 00:12:16.063 Vendor ID: 1b36 00:12:16.063 Subsystem Vendor ID: 1af4 00:12:16.063 Serial Number: 12341 00:12:16.063 Model Number: QEMU NVMe Ctrl 00:12:16.063 Firmware Version: 8.0.0 00:12:16.063 Recommended Arb Burst: 6 00:12:16.063 IEEE OUI Identifier: 00 54 52 00:12:16.063 Multi-path I/O 00:12:16.063 May have multiple subsystem ports: No 00:12:16.063 May have multiple 
controllers: No 00:12:16.063 Associated with SR-IOV VF: No 00:12:16.063 Max Data Transfer Size: 524288 00:12:16.063 Max Number of Namespaces: 256 00:12:16.063 Max Number of I/O Queues: 64 00:12:16.063 NVMe Specification Version (VS): 1.4 00:12:16.063 NVMe Specification Version (Identify): 1.4 00:12:16.063 Maximum Queue Entries: 2048 00:12:16.063 Contiguous Queues Required: Yes 00:12:16.063 Arbitration Mechanisms Supported 00:12:16.063 Weighted Round Robin: Not Supported 00:12:16.063 Vendor Specific: Not Supported 00:12:16.063 Reset Timeout: 7500 ms 00:12:16.063 Doorbell Stride: 4 bytes 00:12:16.063 NVM Subsystem Reset: Not Supported 00:12:16.063 Command Sets Supported 00:12:16.063 NVM Command Set: Supported 00:12:16.063 Boot Partition: Not Supported 00:12:16.063 Memory Page Size Minimum: 4096 bytes 00:12:16.063 Memory Page Size Maximum: 65536 bytes 00:12:16.063 Persistent Memory Region: Not Supported 00:12:16.063 Optional Asynchronous Events Supported 00:12:16.063 Namespace Attribute Notices: Supported 00:12:16.063 Firmware Activation Notices: Not Supported 00:12:16.063 ANA Change Notices: Not Supported 00:12:16.063 PLE Aggregate Log Change Notices: Not Supported 00:12:16.063 LBA Status Info Alert Notices: Not Supported 00:12:16.063 EGE Aggregate Log Change Notices: Not Supported 00:12:16.063 Normal NVM Subsystem Shutdown event: Not Supported 00:12:16.063 Zone Descriptor Change Notices: Not Supported 00:12:16.063 Discovery Log Change Notices: Not Supported 00:12:16.063 Controller Attributes 00:12:16.063 128-bit Host Identifier: Not Supported 00:12:16.063 Non-Operational Permissive Mode: Not Supported 00:12:16.063 NVM Sets: Not Supported 00:12:16.063 Read Recovery Levels: Not Supported 00:12:16.063 Endurance Groups: Not Supported 00:12:16.063 Predictable Latency Mode: Not Supported 00:12:16.063 Traffic Based Keep ALive: Not Supported 00:12:16.063 Namespace Granularity: Not Supported 00:12:16.063 SQ Associations: Not Supported 00:12:16.063 UUID List: Not Supported 00:12:16.063 Multi-Domain Subsystem: Not Supported 00:12:16.063 Fixed Capacity Management: Not Supported 00:12:16.063 Variable Capacity Management: Not Supported 00:12:16.063 Delete Endurance Group: Not Supported 00:12:16.063 Delete NVM Set: Not Supported 00:12:16.063 Extended LBA Formats Supported: Supported 00:12:16.063 Flexible Data Placement Supported: Not Supported 00:12:16.063 00:12:16.063 Controller Memory Buffer Support 00:12:16.063 ================================ 00:12:16.063 Supported: No 00:12:16.063 00:12:16.063 Persistent Memory Region Support 00:12:16.063 ================================ 00:12:16.063 Supported: No 00:12:16.063 00:12:16.063 Admin Command Set Attributes 00:12:16.063 ============================ 00:12:16.063 Security Send/Receive: Not Supported 00:12:16.063 Format NVM: Supported 00:12:16.063 Firmware Activate/Download: Not Supported 00:12:16.063 Namespace Management: Supported 00:12:16.063 Device Self-Test: Not Supported 00:12:16.063 Directives: Supported 00:12:16.063 NVMe-MI: Not Supported 00:12:16.063 Virtualization Management: Not Supported 00:12:16.063 Doorbell Buffer Config: Supported 00:12:16.063 Get LBA Status Capability: Not Supported 00:12:16.063 Command & Feature Lockdown Capability: Not Supported 00:12:16.063 Abort Command Limit: 4 00:12:16.063 Async Event Request Limit: 4 00:12:16.063 Number of Firmware Slots: N/A 00:12:16.063 Firmware Slot 1 Read-Only: N/A 00:12:16.063 Firmware Activation Without Reset: N/A 00:12:16.063 Multiple Update Detection Support: N/A 00:12:16.063 Firmware Update 
Granularity: No Information Provided 00:12:16.063 Per-Namespace SMART Log: Yes 00:12:16.063 Asymmetric Namespace Access Log Page: Not Supported 00:12:16.063 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:12:16.063 Command Effects Log Page: Supported 00:12:16.063 Get Log Page Extended Data: Supported 00:12:16.063 Telemetry Log Pages: Not Supported 00:12:16.063 Persistent Event Log Pages: Not Supported 00:12:16.063 Supported Log Pages Log Page: May Support 00:12:16.063 Commands Supported & Effects Log Page: Not Supported 00:12:16.063 Feature Identifiers & Effects Log Page:May Support 00:12:16.063 NVMe-MI Commands & Effects Log Page: May Support 00:12:16.063 Data Area 4 for Telemetry Log: Not Supported 00:12:16.063 Error Log Page Entries Supported: 1 00:12:16.063 Keep Alive: Not Supported 00:12:16.063 00:12:16.063 NVM Command Set Attributes 00:12:16.063 ========================== 00:12:16.063 Submission Queue Entry Size 00:12:16.063 Max: 64 00:12:16.063 Min: 64 00:12:16.063 Completion Queue Entry Size 00:12:16.063 Max: 16 00:12:16.063 Min: 16 00:12:16.063 Number of Namespaces: 256 00:12:16.063 Compare Command: Supported 00:12:16.063 Write Uncorrectable Command: Not Supported 00:12:16.063 Dataset Management Command: Supported 00:12:16.063 Write Zeroes Command: Supported 00:12:16.063 Set Features Save Field: Supported 00:12:16.063 Reservations: Not Supported 00:12:16.063 Timestamp: Supported 00:12:16.063 Copy: Supported 00:12:16.063 Volatile Write Cache: Present 00:12:16.063 Atomic Write Unit (Normal): 1 00:12:16.063 Atomic Write Unit (PFail): 1 00:12:16.063 Atomic Compare & Write Unit: 1 00:12:16.063 Fused Compare & Write: Not Supported 00:12:16.063 Scatter-Gather List 00:12:16.063 SGL Command Set: Supported 00:12:16.063 SGL Keyed: Not Supported 00:12:16.063 SGL Bit Bucket Descriptor: Not Supported 00:12:16.063 SGL Metadata Pointer: Not Supported 00:12:16.063 Oversized SGL: Not Supported 00:12:16.063 SGL Metadata Address: Not Supported 00:12:16.063 SGL Offset: Not Supported 00:12:16.063 Transport SGL Data Block: Not Supported 00:12:16.063 Replay Protected Memory Block: Not Supported 00:12:16.063 00:12:16.063 Firmware Slot Information 00:12:16.063 ========================= 00:12:16.063 Active slot: 1 00:12:16.063 Slot 1 Firmware Revision: 1.0 00:12:16.063 00:12:16.063 00:12:16.063 Commands Supported and Effects 00:12:16.063 ============================== 00:12:16.063 Admin Commands 00:12:16.063 -------------- 00:12:16.063 Delete I/O Submission Queue (00h): Supported 00:12:16.063 Create I/O Submission Queue (01h): Supported 00:12:16.063 Get Log Page (02h): Supported 00:12:16.063 Delete I/O Completion Queue (04h): Supported 00:12:16.063 Create I/O Completion Queue (05h): Supported 00:12:16.063 Identify (06h): Supported 00:12:16.063 Abort (08h): Supported 00:12:16.063 Set Features (09h): Supported 00:12:16.063 Get Features (0Ah): Supported 00:12:16.064 Asynchronous Event Request (0Ch): Supported 00:12:16.064 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:16.064 Directive Send (19h): Supported 00:12:16.064 Directive Receive (1Ah): Supported 00:12:16.064 Virtualization Management (1Ch): Supported 00:12:16.064 Doorbell Buffer Config (7Ch): Supported 00:12:16.064 Format NVM (80h): Supported LBA-Change 00:12:16.064 I/O Commands 00:12:16.064 ------------ 00:12:16.064 Flush (00h): Supported LBA-Change 00:12:16.064 Write (01h): Supported LBA-Change 00:12:16.064 Read (02h): Supported 00:12:16.064 Compare (05h): Supported 00:12:16.064 Write Zeroes (08h): Supported LBA-Change 00:12:16.064 
Dataset Management (09h): Supported LBA-Change 00:12:16.064 Unknown (0Ch): Supported 00:12:16.064 Unknown (12h): Supported 00:12:16.064 Copy (19h): Supported LBA-Change 00:12:16.064 Unknown (1Dh): Supported LBA-Change 00:12:16.064 00:12:16.064 Error Log 00:12:16.064 ========= 00:12:16.064 00:12:16.064 Arbitration 00:12:16.064 =========== 00:12:16.064 Arbitration Burst: no limit 00:12:16.064 00:12:16.064 Power Management 00:12:16.064 ================ 00:12:16.064 Number of Power States: 1 00:12:16.064 Current Power State: Power State #0 00:12:16.064 Power State #0: 00:12:16.064 Max Power: 25.00 W 00:12:16.064 Non-Operational State: Operational 00:12:16.064 Entry Latency: 16 microseconds 00:12:16.064 Exit Latency: 4 microseconds 00:12:16.064 Relative Read Throughput: 0 00:12:16.064 Relative Read Latency: 0 00:12:16.064 Relative Write Throughput: 0 00:12:16.064 Relative Write Latency: 0 00:12:16.322 Idle Power: Not Reported 00:12:16.322 Active Power: Not Reported 00:12:16.322 Non-Operational Permissive Mode: Not Supported 00:12:16.322 00:12:16.322 Health Information 00:12:16.322 ================== 00:12:16.322 Critical Warnings: 00:12:16.322 Available Spare Space: OK 00:12:16.322 Temperature: OK 00:12:16.322 Device Reliability: OK 00:12:16.322 Read Only: No 00:12:16.322 Volatile Memory Backup: OK 00:12:16.322 Current Temperature: 323 Kelvin (50 Celsius) 00:12:16.322 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:16.322 Available Spare: 0% 00:12:16.322 Available Spare Threshold: 0% 00:12:16.322 Life Percentage Used: 0% 00:12:16.322 Data Units Read: 1053 00:12:16.322 Data Units Written: 920 00:12:16.322 Host Read Commands: 46564 00:12:16.322 Host Write Commands: 45356 00:12:16.322 Controller Busy Time: 0 minutes 00:12:16.322 Power Cycles: 0 00:12:16.322 Power On Hours: 0 hours 00:12:16.322 Unsafe Shutdowns: 0 00:12:16.322 Unrecoverable Media Errors: 0 00:12:16.322 Lifetime Error Log Entries: 0 00:12:16.322 Warning Temperature Time: 0 minutes 00:12:16.322 Critical Temperature Time: 0 minutes 00:12:16.322 00:12:16.322 Number of Queues 00:12:16.322 ================ 00:12:16.322 Number of I/O Submission Queues: 64 00:12:16.322 Number of I/O Completion Queues: 64 00:12:16.322 00:12:16.322 ZNS Specific Controller Data 00:12:16.322 ============================ 00:12:16.322 Zone Append Size Limit: 0 00:12:16.322 00:12:16.322 00:12:16.322 Active Namespaces 00:12:16.322 ================= 00:12:16.322 Namespace ID:1 00:12:16.322 Error Recovery Timeout: Unlimited 00:12:16.322 Command Set Identifier: NVM (00h) 00:12:16.322 Deallocate: Supported 00:12:16.322 Deallocated/Unwritten Error: Supported 00:12:16.322 Deallocated Read Value: All 0x00 00:12:16.322 Deallocate in Write Zeroes: Not Supported 00:12:16.322 Deallocated Guard Field: 0xFFFF 00:12:16.323 Flush: Supported 00:12:16.323 Reservation: Not Supported 00:12:16.323 Namespace Sharing Capabilities: Private 00:12:16.323 Size (in LBAs): 1310720 (5GiB) 00:12:16.323 Capacity (in LBAs): 1310720 (5GiB) 00:12:16.323 Utilization (in LBAs): 1310720 (5GiB) 00:12:16.323 Thin Provisioning: Not Supported 00:12:16.323 Per-NS Atomic Units: No 00:12:16.323 Maximum Single Source Range Length: 128 00:12:16.323 Maximum Copy Length: 128 00:12:16.323 Maximum Source Range Count: 128 00:12:16.323 NGUID/EUI64 Never Reused: No 00:12:16.323 Namespace Write Protected: No 00:12:16.323 Number of LBA Formats: 8 00:12:16.323 Current LBA Format: LBA Format #04 00:12:16.323 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:16.323 LBA Format #01: Data Size: 512 Metadata Size: 8 
00:12:16.323 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:16.323 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:16.323 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:16.323 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:16.323 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:16.323 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:16.323 00:12:16.323 NVM Specific Namespace Data 00:12:16.323 =========================== 00:12:16.323 Logical Block Storage Tag Mask: 0 00:12:16.323 Protection Information Capabilities: 00:12:16.323 16b Guard Protection Information Storage Tag Support: No 00:12:16.323 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:16.323 Storage Tag Check Read Support: No 00:12:16.323 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.323 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.323 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.323 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.323 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.323 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.323 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.323 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.323 07:48:38 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:12:16.323 07:48:38 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:12:16.583 ===================================================== 00:12:16.583 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:16.583 ===================================================== 00:12:16.583 Controller Capabilities/Features 00:12:16.583 ================================ 00:12:16.583 Vendor ID: 1b36 00:12:16.583 Subsystem Vendor ID: 1af4 00:12:16.583 Serial Number: 12342 00:12:16.583 Model Number: QEMU NVMe Ctrl 00:12:16.583 Firmware Version: 8.0.0 00:12:16.583 Recommended Arb Burst: 6 00:12:16.583 IEEE OUI Identifier: 00 54 52 00:12:16.583 Multi-path I/O 00:12:16.583 May have multiple subsystem ports: No 00:12:16.583 May have multiple controllers: No 00:12:16.583 Associated with SR-IOV VF: No 00:12:16.583 Max Data Transfer Size: 524288 00:12:16.583 Max Number of Namespaces: 256 00:12:16.583 Max Number of I/O Queues: 64 00:12:16.583 NVMe Specification Version (VS): 1.4 00:12:16.583 NVMe Specification Version (Identify): 1.4 00:12:16.583 Maximum Queue Entries: 2048 00:12:16.583 Contiguous Queues Required: Yes 00:12:16.583 Arbitration Mechanisms Supported 00:12:16.583 Weighted Round Robin: Not Supported 00:12:16.583 Vendor Specific: Not Supported 00:12:16.583 Reset Timeout: 7500 ms 00:12:16.583 Doorbell Stride: 4 bytes 00:12:16.583 NVM Subsystem Reset: Not Supported 00:12:16.583 Command Sets Supported 00:12:16.583 NVM Command Set: Supported 00:12:16.583 Boot Partition: Not Supported 00:12:16.583 Memory Page Size Minimum: 4096 bytes 00:12:16.583 Memory Page Size Maximum: 65536 bytes 00:12:16.583 Persistent Memory Region: Not Supported 00:12:16.583 Optional Asynchronous Events Supported 00:12:16.583 Namespace Attribute Notices: Supported 00:12:16.583 Firmware 
Activation Notices: Not Supported 00:12:16.583 ANA Change Notices: Not Supported 00:12:16.583 PLE Aggregate Log Change Notices: Not Supported 00:12:16.583 LBA Status Info Alert Notices: Not Supported 00:12:16.583 EGE Aggregate Log Change Notices: Not Supported 00:12:16.583 Normal NVM Subsystem Shutdown event: Not Supported 00:12:16.583 Zone Descriptor Change Notices: Not Supported 00:12:16.583 Discovery Log Change Notices: Not Supported 00:12:16.583 Controller Attributes 00:12:16.583 128-bit Host Identifier: Not Supported 00:12:16.583 Non-Operational Permissive Mode: Not Supported 00:12:16.583 NVM Sets: Not Supported 00:12:16.583 Read Recovery Levels: Not Supported 00:12:16.583 Endurance Groups: Not Supported 00:12:16.583 Predictable Latency Mode: Not Supported 00:12:16.583 Traffic Based Keep ALive: Not Supported 00:12:16.583 Namespace Granularity: Not Supported 00:12:16.583 SQ Associations: Not Supported 00:12:16.583 UUID List: Not Supported 00:12:16.583 Multi-Domain Subsystem: Not Supported 00:12:16.583 Fixed Capacity Management: Not Supported 00:12:16.583 Variable Capacity Management: Not Supported 00:12:16.583 Delete Endurance Group: Not Supported 00:12:16.583 Delete NVM Set: Not Supported 00:12:16.583 Extended LBA Formats Supported: Supported 00:12:16.583 Flexible Data Placement Supported: Not Supported 00:12:16.583 00:12:16.583 Controller Memory Buffer Support 00:12:16.583 ================================ 00:12:16.583 Supported: No 00:12:16.583 00:12:16.583 Persistent Memory Region Support 00:12:16.583 ================================ 00:12:16.583 Supported: No 00:12:16.583 00:12:16.583 Admin Command Set Attributes 00:12:16.583 ============================ 00:12:16.583 Security Send/Receive: Not Supported 00:12:16.583 Format NVM: Supported 00:12:16.583 Firmware Activate/Download: Not Supported 00:12:16.583 Namespace Management: Supported 00:12:16.583 Device Self-Test: Not Supported 00:12:16.583 Directives: Supported 00:12:16.583 NVMe-MI: Not Supported 00:12:16.583 Virtualization Management: Not Supported 00:12:16.583 Doorbell Buffer Config: Supported 00:12:16.583 Get LBA Status Capability: Not Supported 00:12:16.583 Command & Feature Lockdown Capability: Not Supported 00:12:16.584 Abort Command Limit: 4 00:12:16.584 Async Event Request Limit: 4 00:12:16.584 Number of Firmware Slots: N/A 00:12:16.584 Firmware Slot 1 Read-Only: N/A 00:12:16.584 Firmware Activation Without Reset: N/A 00:12:16.584 Multiple Update Detection Support: N/A 00:12:16.584 Firmware Update Granularity: No Information Provided 00:12:16.584 Per-Namespace SMART Log: Yes 00:12:16.584 Asymmetric Namespace Access Log Page: Not Supported 00:12:16.584 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:12:16.584 Command Effects Log Page: Supported 00:12:16.584 Get Log Page Extended Data: Supported 00:12:16.584 Telemetry Log Pages: Not Supported 00:12:16.584 Persistent Event Log Pages: Not Supported 00:12:16.584 Supported Log Pages Log Page: May Support 00:12:16.584 Commands Supported & Effects Log Page: Not Supported 00:12:16.584 Feature Identifiers & Effects Log Page:May Support 00:12:16.584 NVMe-MI Commands & Effects Log Page: May Support 00:12:16.584 Data Area 4 for Telemetry Log: Not Supported 00:12:16.584 Error Log Page Entries Supported: 1 00:12:16.584 Keep Alive: Not Supported 00:12:16.584 00:12:16.584 NVM Command Set Attributes 00:12:16.584 ========================== 00:12:16.584 Submission Queue Entry Size 00:12:16.584 Max: 64 00:12:16.584 Min: 64 00:12:16.584 Completion Queue Entry Size 00:12:16.584 Max: 16 
00:12:16.584 Min: 16 00:12:16.584 Number of Namespaces: 256 00:12:16.584 Compare Command: Supported 00:12:16.584 Write Uncorrectable Command: Not Supported 00:12:16.584 Dataset Management Command: Supported 00:12:16.584 Write Zeroes Command: Supported 00:12:16.584 Set Features Save Field: Supported 00:12:16.584 Reservations: Not Supported 00:12:16.584 Timestamp: Supported 00:12:16.584 Copy: Supported 00:12:16.584 Volatile Write Cache: Present 00:12:16.584 Atomic Write Unit (Normal): 1 00:12:16.584 Atomic Write Unit (PFail): 1 00:12:16.584 Atomic Compare & Write Unit: 1 00:12:16.584 Fused Compare & Write: Not Supported 00:12:16.584 Scatter-Gather List 00:12:16.584 SGL Command Set: Supported 00:12:16.584 SGL Keyed: Not Supported 00:12:16.584 SGL Bit Bucket Descriptor: Not Supported 00:12:16.584 SGL Metadata Pointer: Not Supported 00:12:16.584 Oversized SGL: Not Supported 00:12:16.584 SGL Metadata Address: Not Supported 00:12:16.584 SGL Offset: Not Supported 00:12:16.584 Transport SGL Data Block: Not Supported 00:12:16.584 Replay Protected Memory Block: Not Supported 00:12:16.584 00:12:16.584 Firmware Slot Information 00:12:16.584 ========================= 00:12:16.584 Active slot: 1 00:12:16.584 Slot 1 Firmware Revision: 1.0 00:12:16.584 00:12:16.584 00:12:16.584 Commands Supported and Effects 00:12:16.584 ============================== 00:12:16.584 Admin Commands 00:12:16.584 -------------- 00:12:16.584 Delete I/O Submission Queue (00h): Supported 00:12:16.584 Create I/O Submission Queue (01h): Supported 00:12:16.584 Get Log Page (02h): Supported 00:12:16.584 Delete I/O Completion Queue (04h): Supported 00:12:16.584 Create I/O Completion Queue (05h): Supported 00:12:16.584 Identify (06h): Supported 00:12:16.584 Abort (08h): Supported 00:12:16.584 Set Features (09h): Supported 00:12:16.584 Get Features (0Ah): Supported 00:12:16.584 Asynchronous Event Request (0Ch): Supported 00:12:16.584 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:16.584 Directive Send (19h): Supported 00:12:16.584 Directive Receive (1Ah): Supported 00:12:16.584 Virtualization Management (1Ch): Supported 00:12:16.584 Doorbell Buffer Config (7Ch): Supported 00:12:16.584 Format NVM (80h): Supported LBA-Change 00:12:16.584 I/O Commands 00:12:16.584 ------------ 00:12:16.584 Flush (00h): Supported LBA-Change 00:12:16.584 Write (01h): Supported LBA-Change 00:12:16.584 Read (02h): Supported 00:12:16.584 Compare (05h): Supported 00:12:16.584 Write Zeroes (08h): Supported LBA-Change 00:12:16.584 Dataset Management (09h): Supported LBA-Change 00:12:16.584 Unknown (0Ch): Supported 00:12:16.584 Unknown (12h): Supported 00:12:16.584 Copy (19h): Supported LBA-Change 00:12:16.584 Unknown (1Dh): Supported LBA-Change 00:12:16.584 00:12:16.584 Error Log 00:12:16.584 ========= 00:12:16.584 00:12:16.584 Arbitration 00:12:16.584 =========== 00:12:16.584 Arbitration Burst: no limit 00:12:16.584 00:12:16.584 Power Management 00:12:16.584 ================ 00:12:16.584 Number of Power States: 1 00:12:16.584 Current Power State: Power State #0 00:12:16.584 Power State #0: 00:12:16.584 Max Power: 25.00 W 00:12:16.584 Non-Operational State: Operational 00:12:16.584 Entry Latency: 16 microseconds 00:12:16.584 Exit Latency: 4 microseconds 00:12:16.584 Relative Read Throughput: 0 00:12:16.584 Relative Read Latency: 0 00:12:16.584 Relative Write Throughput: 0 00:12:16.584 Relative Write Latency: 0 00:12:16.584 Idle Power: Not Reported 00:12:16.584 Active Power: Not Reported 00:12:16.584 Non-Operational Permissive Mode: Not Supported 
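The nvme.sh@15-16 trace lines above show the loop driving these controller dumps: spdk_nvme_identify is invoked once per PCIe address. A minimal stand-alone sketch of that loop, assuming only what the trace itself shows (the binary path, the -r/-i flags, and the four addresses probed in this log); the bdfs array framing here is illustrative, not the exact nvme.sh source:

    #!/usr/bin/env bash
    # Re-create the traced identify loop: dump each QEMU NVMe controller
    # (1b36:0010) that appears in this log, one spdk_nvme_identify run per BDF.
    bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
    for bdf in "${bdfs[@]}"; do
        # -r selects the PCIe transport and target address; -i 0 matches the
        # shared-memory id used by the invocations traced at nvme/nvme.sh@16.
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
            -r "trtype:PCIe traddr:$bdf" -i 0
    done

Run outside CI against the same QEMU setup, this should reproduce the per-controller sections seen here (12340, 12341, 12342, and the FDP-enabled 12343 subsystem).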
00:12:16.584 00:12:16.584 Health Information 00:12:16.584 ================== 00:12:16.584 Critical Warnings: 00:12:16.584 Available Spare Space: OK 00:12:16.584 Temperature: OK 00:12:16.584 Device Reliability: OK 00:12:16.584 Read Only: No 00:12:16.584 Volatile Memory Backup: OK 00:12:16.584 Current Temperature: 323 Kelvin (50 Celsius) 00:12:16.584 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:16.584 Available Spare: 0% 00:12:16.584 Available Spare Threshold: 0% 00:12:16.584 Life Percentage Used: 0% 00:12:16.584 Data Units Read: 2136 00:12:16.584 Data Units Written: 1924 00:12:16.584 Host Read Commands: 96851 00:12:16.584 Host Write Commands: 95120 00:12:16.584 Controller Busy Time: 0 minutes 00:12:16.584 Power Cycles: 0 00:12:16.584 Power On Hours: 0 hours 00:12:16.584 Unsafe Shutdowns: 0 00:12:16.584 Unrecoverable Media Errors: 0 00:12:16.584 Lifetime Error Log Entries: 0 00:12:16.584 Warning Temperature Time: 0 minutes 00:12:16.584 Critical Temperature Time: 0 minutes 00:12:16.584 00:12:16.584 Number of Queues 00:12:16.584 ================ 00:12:16.584 Number of I/O Submission Queues: 64 00:12:16.584 Number of I/O Completion Queues: 64 00:12:16.584 00:12:16.584 ZNS Specific Controller Data 00:12:16.584 ============================ 00:12:16.584 Zone Append Size Limit: 0 00:12:16.584 00:12:16.584 00:12:16.584 Active Namespaces 00:12:16.584 ================= 00:12:16.584 Namespace ID:1 00:12:16.584 Error Recovery Timeout: Unlimited 00:12:16.584 Command Set Identifier: NVM (00h) 00:12:16.584 Deallocate: Supported 00:12:16.584 Deallocated/Unwritten Error: Supported 00:12:16.584 Deallocated Read Value: All 0x00 00:12:16.584 Deallocate in Write Zeroes: Not Supported 00:12:16.584 Deallocated Guard Field: 0xFFFF 00:12:16.584 Flush: Supported 00:12:16.584 Reservation: Not Supported 00:12:16.584 Namespace Sharing Capabilities: Private 00:12:16.584 Size (in LBAs): 1048576 (4GiB) 00:12:16.584 Capacity (in LBAs): 1048576 (4GiB) 00:12:16.584 Utilization (in LBAs): 1048576 (4GiB) 00:12:16.584 Thin Provisioning: Not Supported 00:12:16.584 Per-NS Atomic Units: No 00:12:16.584 Maximum Single Source Range Length: 128 00:12:16.584 Maximum Copy Length: 128 00:12:16.584 Maximum Source Range Count: 128 00:12:16.584 NGUID/EUI64 Never Reused: No 00:12:16.584 Namespace Write Protected: No 00:12:16.584 Number of LBA Formats: 8 00:12:16.584 Current LBA Format: LBA Format #04 00:12:16.584 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:16.584 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:16.584 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:16.584 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:16.584 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:16.584 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:16.584 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:16.584 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:16.584 00:12:16.584 NVM Specific Namespace Data 00:12:16.584 =========================== 00:12:16.584 Logical Block Storage Tag Mask: 0 00:12:16.584 Protection Information Capabilities: 00:12:16.584 16b Guard Protection Information Storage Tag Support: No 00:12:16.584 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:16.584 Storage Tag Check Read Support: No 00:12:16.584 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.585 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.585 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.585 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.585 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.585 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.585 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.585 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.585 Namespace ID:2 00:12:16.585 Error Recovery Timeout: Unlimited 00:12:16.585 Command Set Identifier: NVM (00h) 00:12:16.585 Deallocate: Supported 00:12:16.585 Deallocated/Unwritten Error: Supported 00:12:16.585 Deallocated Read Value: All 0x00 00:12:16.585 Deallocate in Write Zeroes: Not Supported 00:12:16.585 Deallocated Guard Field: 0xFFFF 00:12:16.585 Flush: Supported 00:12:16.585 Reservation: Not Supported 00:12:16.585 Namespace Sharing Capabilities: Private 00:12:16.585 Size (in LBAs): 1048576 (4GiB) 00:12:16.585 Capacity (in LBAs): 1048576 (4GiB) 00:12:16.585 Utilization (in LBAs): 1048576 (4GiB) 00:12:16.585 Thin Provisioning: Not Supported 00:12:16.585 Per-NS Atomic Units: No 00:12:16.585 Maximum Single Source Range Length: 128 00:12:16.585 Maximum Copy Length: 128 00:12:16.585 Maximum Source Range Count: 128 00:12:16.585 NGUID/EUI64 Never Reused: No 00:12:16.585 Namespace Write Protected: No 00:12:16.585 Number of LBA Formats: 8 00:12:16.585 Current LBA Format: LBA Format #04 00:12:16.585 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:16.585 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:16.585 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:16.585 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:16.585 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:16.585 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:16.585 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:16.585 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:16.585 00:12:16.585 NVM Specific Namespace Data 00:12:16.585 =========================== 00:12:16.585 Logical Block Storage Tag Mask: 0 00:12:16.585 Protection Information Capabilities: 00:12:16.585 16b Guard Protection Information Storage Tag Support: No 00:12:16.585 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:16.585 Storage Tag Check Read Support: No 00:12:16.585 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.585 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.585 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.585 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.585 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.585 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.585 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.585 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.585 Namespace ID:3 00:12:16.585 Error Recovery Timeout: Unlimited 00:12:16.585 Command Set Identifier: NVM (00h) 00:12:16.585 Deallocate: Supported 00:12:16.585 Deallocated/Unwritten Error: Supported 00:12:16.585 Deallocated Read 
Value: All 0x00 00:12:16.585 Deallocate in Write Zeroes: Not Supported 00:12:16.585 Deallocated Guard Field: 0xFFFF 00:12:16.585 Flush: Supported 00:12:16.585 Reservation: Not Supported 00:12:16.585 Namespace Sharing Capabilities: Private 00:12:16.585 Size (in LBAs): 1048576 (4GiB) 00:12:16.585 Capacity (in LBAs): 1048576 (4GiB) 00:12:16.585 Utilization (in LBAs): 1048576 (4GiB) 00:12:16.585 Thin Provisioning: Not Supported 00:12:16.585 Per-NS Atomic Units: No 00:12:16.585 Maximum Single Source Range Length: 128 00:12:16.585 Maximum Copy Length: 128 00:12:16.585 Maximum Source Range Count: 128 00:12:16.585 NGUID/EUI64 Never Reused: No 00:12:16.585 Namespace Write Protected: No 00:12:16.585 Number of LBA Formats: 8 00:12:16.585 Current LBA Format: LBA Format #04 00:12:16.585 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:16.585 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:16.585 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:16.585 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:16.585 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:16.585 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:16.585 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:16.585 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:16.585 00:12:16.585 NVM Specific Namespace Data 00:12:16.585 =========================== 00:12:16.585 Logical Block Storage Tag Mask: 0 00:12:16.585 Protection Information Capabilities: 00:12:16.585 16b Guard Protection Information Storage Tag Support: No 00:12:16.585 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:16.585 Storage Tag Check Read Support: No 00:12:16.585 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.585 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.585 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.585 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.585 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.585 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.585 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.585 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.585 07:48:39 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:12:16.585 07:48:39 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:12:16.916 ===================================================== 00:12:16.916 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:16.916 ===================================================== 00:12:16.916 Controller Capabilities/Features 00:12:16.916 ================================ 00:12:16.916 Vendor ID: 1b36 00:12:16.916 Subsystem Vendor ID: 1af4 00:12:16.916 Serial Number: 12343 00:12:16.916 Model Number: QEMU NVMe Ctrl 00:12:16.916 Firmware Version: 8.0.0 00:12:16.916 Recommended Arb Burst: 6 00:12:16.916 IEEE OUI Identifier: 00 54 52 00:12:16.916 Multi-path I/O 00:12:16.916 May have multiple subsystem ports: No 00:12:16.916 May have multiple controllers: Yes 00:12:16.916 Associated with SR-IOV VF: No 00:12:16.916 Max Data Transfer Size: 524288 00:12:16.916 Max Number of Namespaces: 
256 00:12:16.916 Max Number of I/O Queues: 64 00:12:16.916 NVMe Specification Version (VS): 1.4 00:12:16.916 NVMe Specification Version (Identify): 1.4 00:12:16.916 Maximum Queue Entries: 2048 00:12:16.916 Contiguous Queues Required: Yes 00:12:16.916 Arbitration Mechanisms Supported 00:12:16.916 Weighted Round Robin: Not Supported 00:12:16.916 Vendor Specific: Not Supported 00:12:16.916 Reset Timeout: 7500 ms 00:12:16.916 Doorbell Stride: 4 bytes 00:12:16.916 NVM Subsystem Reset: Not Supported 00:12:16.916 Command Sets Supported 00:12:16.916 NVM Command Set: Supported 00:12:16.916 Boot Partition: Not Supported 00:12:16.916 Memory Page Size Minimum: 4096 bytes 00:12:16.916 Memory Page Size Maximum: 65536 bytes 00:12:16.916 Persistent Memory Region: Not Supported 00:12:16.916 Optional Asynchronous Events Supported 00:12:16.916 Namespace Attribute Notices: Supported 00:12:16.916 Firmware Activation Notices: Not Supported 00:12:16.916 ANA Change Notices: Not Supported 00:12:16.916 PLE Aggregate Log Change Notices: Not Supported 00:12:16.916 LBA Status Info Alert Notices: Not Supported 00:12:16.916 EGE Aggregate Log Change Notices: Not Supported 00:12:16.916 Normal NVM Subsystem Shutdown event: Not Supported 00:12:16.916 Zone Descriptor Change Notices: Not Supported 00:12:16.916 Discovery Log Change Notices: Not Supported 00:12:16.916 Controller Attributes 00:12:16.916 128-bit Host Identifier: Not Supported 00:12:16.916 Non-Operational Permissive Mode: Not Supported 00:12:16.916 NVM Sets: Not Supported 00:12:16.916 Read Recovery Levels: Not Supported 00:12:16.916 Endurance Groups: Supported 00:12:16.916 Predictable Latency Mode: Not Supported 00:12:16.916 Traffic Based Keep Alive: Not Supported 00:12:16.916 Namespace Granularity: Not Supported 00:12:16.916 SQ Associations: Not Supported 00:12:16.916 UUID List: Not Supported 00:12:16.916 Multi-Domain Subsystem: Not Supported 00:12:16.916 Fixed Capacity Management: Not Supported 00:12:16.916 Variable Capacity Management: Not Supported 00:12:16.916 Delete Endurance Group: Not Supported 00:12:16.916 Delete NVM Set: Not Supported 00:12:16.916 Extended LBA Formats Supported: Supported 00:12:16.916 Flexible Data Placement Supported: Supported 00:12:16.916 00:12:16.916 Controller Memory Buffer Support 00:12:16.916 ================================ 00:12:16.916 Supported: No 00:12:16.916 00:12:16.916 Persistent Memory Region Support 00:12:16.916 ================================ 00:12:16.916 Supported: No 00:12:16.916 00:12:16.916 Admin Command Set Attributes 00:12:16.916 ============================ 00:12:16.916 Security Send/Receive: Not Supported 00:12:16.916 Format NVM: Supported 00:12:16.916 Firmware Activate/Download: Not Supported 00:12:16.916 Namespace Management: Supported 00:12:16.916 Device Self-Test: Not Supported 00:12:16.916 Directives: Supported 00:12:16.916 NVMe-MI: Not Supported 00:12:16.916 Virtualization Management: Not Supported 00:12:16.916 Doorbell Buffer Config: Supported 00:12:16.916 Get LBA Status Capability: Not Supported 00:12:16.916 Command & Feature Lockdown Capability: Not Supported 00:12:16.916 Abort Command Limit: 4 00:12:16.916 Async Event Request Limit: 4 00:12:16.916 Number of Firmware Slots: N/A 00:12:16.916 Firmware Slot 1 Read-Only: N/A 00:12:16.916 Firmware Activation Without Reset: N/A 00:12:16.916 Multiple Update Detection Support: N/A 00:12:16.916 Firmware Update Granularity: No Information Provided 00:12:16.916 Per-Namespace SMART Log: Yes 00:12:16.916 Asymmetric Namespace Access Log Page: Not Supported 
00:12:16.916 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:12:16.916 Command Effects Log Page: Supported 00:12:16.916 Get Log Page Extended Data: Supported 00:12:16.916 Telemetry Log Pages: Not Supported 00:12:16.916 Persistent Event Log Pages: Not Supported 00:12:16.916 Supported Log Pages Log Page: May Support 00:12:16.916 Commands Supported & Effects Log Page: Not Supported 00:12:16.916 Feature Identifiers & Effects Log Page: May Support 00:12:16.916 NVMe-MI Commands & Effects Log Page: May Support 00:12:16.916 Data Area 4 for Telemetry Log: Not Supported 00:12:16.916 Error Log Page Entries Supported: 1 00:12:16.916 Keep Alive: Not Supported 00:12:16.916 00:12:16.916 NVM Command Set Attributes 00:12:16.916 ========================== 00:12:16.916 Submission Queue Entry Size 00:12:16.916 Max: 64 00:12:16.916 Min: 64 00:12:16.916 Completion Queue Entry Size 00:12:16.916 Max: 16 00:12:16.916 Min: 16 00:12:16.916 Number of Namespaces: 256 00:12:16.916 Compare Command: Supported 00:12:16.916 Write Uncorrectable Command: Not Supported 00:12:16.916 Dataset Management Command: Supported 00:12:16.916 Write Zeroes Command: Supported 00:12:16.916 Set Features Save Field: Supported 00:12:16.916 Reservations: Not Supported 00:12:16.916 Timestamp: Supported 00:12:16.916 Copy: Supported 00:12:16.916 Volatile Write Cache: Present 00:12:16.916 Atomic Write Unit (Normal): 1 00:12:16.916 Atomic Write Unit (PFail): 1 00:12:16.916 Atomic Compare & Write Unit: 1 00:12:16.916 Fused Compare & Write: Not Supported 00:12:16.916 Scatter-Gather List 00:12:16.916 SGL Command Set: Supported 00:12:16.916 SGL Keyed: Not Supported 00:12:16.916 SGL Bit Bucket Descriptor: Not Supported 00:12:16.916 SGL Metadata Pointer: Not Supported 00:12:16.916 Oversized SGL: Not Supported 00:12:16.916 SGL Metadata Address: Not Supported 00:12:16.916 SGL Offset: Not Supported 00:12:16.916 Transport SGL Data Block: Not Supported 00:12:16.916 Replay Protected Memory Block: Not Supported 00:12:16.916 00:12:16.916 Firmware Slot Information 00:12:16.916 ========================= 00:12:16.916 Active slot: 1 00:12:16.916 Slot 1 Firmware Revision: 1.0 00:12:16.916 00:12:16.916 00:12:16.916 Commands Supported and Effects 00:12:16.916 ============================== 00:12:16.916 Admin Commands 00:12:16.916 -------------- 00:12:16.916 Delete I/O Submission Queue (00h): Supported 00:12:16.916 Create I/O Submission Queue (01h): Supported 00:12:16.916 Get Log Page (02h): Supported 00:12:16.916 Delete I/O Completion Queue (04h): Supported 00:12:16.916 Create I/O Completion Queue (05h): Supported 00:12:16.916 Identify (06h): Supported 00:12:16.916 Abort (08h): Supported 00:12:16.916 Set Features (09h): Supported 00:12:16.916 Get Features (0Ah): Supported 00:12:16.916 Asynchronous Event Request (0Ch): Supported 00:12:16.917 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:16.917 Directive Send (19h): Supported 00:12:16.917 Directive Receive (1Ah): Supported 00:12:16.917 Virtualization Management (1Ch): Supported 00:12:16.917 Doorbell Buffer Config (7Ch): Supported 00:12:16.917 Format NVM (80h): Supported LBA-Change 00:12:16.917 I/O Commands 00:12:16.917 ------------ 00:12:16.917 Flush (00h): Supported LBA-Change 00:12:16.917 Write (01h): Supported LBA-Change 00:12:16.917 Read (02h): Supported 00:12:16.917 Compare (05h): Supported 00:12:16.917 Write Zeroes (08h): Supported LBA-Change 00:12:16.917 Dataset Management (09h): Supported LBA-Change 00:12:16.917 Unknown (0Ch): Supported 00:12:16.917 Unknown (12h): Supported 00:12:16.917 Copy 
(19h): Supported LBA-Change 00:12:16.917 Unknown (1Dh): Supported LBA-Change 00:12:16.917 00:12:16.917 Error Log 00:12:16.917 ========= 00:12:16.917 00:12:16.917 Arbitration 00:12:16.917 =========== 00:12:16.917 Arbitration Burst: no limit 00:12:16.917 00:12:16.917 Power Management 00:12:16.917 ================ 00:12:16.917 Number of Power States: 1 00:12:16.917 Current Power State: Power State #0 00:12:16.917 Power State #0: 00:12:16.917 Max Power: 25.00 W 00:12:16.917 Non-Operational State: Operational 00:12:16.917 Entry Latency: 16 microseconds 00:12:16.917 Exit Latency: 4 microseconds 00:12:16.917 Relative Read Throughput: 0 00:12:16.917 Relative Read Latency: 0 00:12:16.917 Relative Write Throughput: 0 00:12:16.917 Relative Write Latency: 0 00:12:16.917 Idle Power: Not Reported 00:12:16.917 Active Power: Not Reported 00:12:16.917 Non-Operational Permissive Mode: Not Supported 00:12:16.917 00:12:16.917 Health Information 00:12:16.917 ================== 00:12:16.917 Critical Warnings: 00:12:16.917 Available Spare Space: OK 00:12:16.917 Temperature: OK 00:12:16.917 Device Reliability: OK 00:12:16.917 Read Only: No 00:12:16.917 Volatile Memory Backup: OK 00:12:16.917 Current Temperature: 323 Kelvin (50 Celsius) 00:12:16.917 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:16.917 Available Spare: 0% 00:12:16.917 Available Spare Threshold: 0% 00:12:16.917 Life Percentage Used: 0% 00:12:16.917 Data Units Read: 783 00:12:16.917 Data Units Written: 712 00:12:16.917 Host Read Commands: 33011 00:12:16.917 Host Write Commands: 32434 00:12:16.917 Controller Busy Time: 0 minutes 00:12:16.917 Power Cycles: 0 00:12:16.917 Power On Hours: 0 hours 00:12:16.917 Unsafe Shutdowns: 0 00:12:16.917 Unrecoverable Media Errors: 0 00:12:16.917 Lifetime Error Log Entries: 0 00:12:16.917 Warning Temperature Time: 0 minutes 00:12:16.917 Critical Temperature Time: 0 minutes 00:12:16.917 00:12:16.917 Number of Queues 00:12:16.917 ================ 00:12:16.917 Number of I/O Submission Queues: 64 00:12:16.917 Number of I/O Completion Queues: 64 00:12:16.917 00:12:16.917 ZNS Specific Controller Data 00:12:16.917 ============================ 00:12:16.917 Zone Append Size Limit: 0 00:12:16.917 00:12:16.917 00:12:16.917 Active Namespaces 00:12:16.917 ================= 00:12:16.917 Namespace ID:1 00:12:16.917 Error Recovery Timeout: Unlimited 00:12:16.917 Command Set Identifier: NVM (00h) 00:12:16.917 Deallocate: Supported 00:12:16.917 Deallocated/Unwritten Error: Supported 00:12:16.917 Deallocated Read Value: All 0x00 00:12:16.917 Deallocate in Write Zeroes: Not Supported 00:12:16.917 Deallocated Guard Field: 0xFFFF 00:12:16.917 Flush: Supported 00:12:16.917 Reservation: Not Supported 00:12:16.917 Namespace Sharing Capabilities: Multiple Controllers 00:12:16.917 Size (in LBAs): 262144 (1GiB) 00:12:16.917 Capacity (in LBAs): 262144 (1GiB) 00:12:16.917 Utilization (in LBAs): 262144 (1GiB) 00:12:16.917 Thin Provisioning: Not Supported 00:12:16.917 Per-NS Atomic Units: No 00:12:16.917 Maximum Single Source Range Length: 128 00:12:16.917 Maximum Copy Length: 128 00:12:16.917 Maximum Source Range Count: 128 00:12:16.917 NGUID/EUI64 Never Reused: No 00:12:16.917 Namespace Write Protected: No 00:12:16.917 Endurance group ID: 1 00:12:16.917 Number of LBA Formats: 8 00:12:16.917 Current LBA Format: LBA Format #04 00:12:16.917 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:16.917 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:16.917 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:16.917 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:12:16.917 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:16.917 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:16.917 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:16.917 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:16.917 00:12:16.917 Get Feature FDP: 00:12:16.917 ================ 00:12:16.917 Enabled: Yes 00:12:16.917 FDP configuration index: 0 00:12:16.917 00:12:16.917 FDP configurations log page 00:12:16.917 =========================== 00:12:16.917 Number of FDP configurations: 1 00:12:16.917 Version: 0 00:12:16.917 Size: 112 00:12:16.917 FDP Configuration Descriptor: 0 00:12:16.917 Descriptor Size: 96 00:12:16.917 Reclaim Group Identifier format: 2 00:12:16.917 FDP Volatile Write Cache: Not Present 00:12:16.917 FDP Configuration: Valid 00:12:16.917 Vendor Specific Size: 0 00:12:16.917 Number of Reclaim Groups: 2 00:12:16.917 Number of Reclaim Unit Handles: 8 00:12:16.917 Max Placement Identifiers: 128 00:12:16.917 Number of Namespaces Supported: 256 00:12:16.917 Reclaim Unit Nominal Size: 6000000 bytes 00:12:16.917 Estimated Reclaim Unit Time Limit: Not Reported 00:12:16.917 RUH Desc #000: RUH Type: Initially Isolated 00:12:16.917 RUH Desc #001: RUH Type: Initially Isolated 00:12:16.917 RUH Desc #002: RUH Type: Initially Isolated 00:12:16.917 RUH Desc #003: RUH Type: Initially Isolated 00:12:16.917 RUH Desc #004: RUH Type: Initially Isolated 00:12:16.917 RUH Desc #005: RUH Type: Initially Isolated 00:12:16.917 RUH Desc #006: RUH Type: Initially Isolated 00:12:16.917 RUH Desc #007: RUH Type: Initially Isolated 00:12:16.917 00:12:16.917 FDP reclaim unit handle usage log page 00:12:16.917 ====================================== 00:12:16.917 Number of Reclaim Unit Handles: 8 00:12:16.917 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:12:16.917 RUH Usage Desc #001: RUH Attributes: Unused 00:12:16.917 RUH Usage Desc #002: RUH Attributes: Unused 00:12:16.917 RUH Usage Desc #003: RUH Attributes: Unused 00:12:16.917 RUH Usage Desc #004: RUH Attributes: Unused 00:12:16.917 RUH Usage Desc #005: RUH Attributes: Unused 00:12:16.917 RUH Usage Desc #006: RUH Attributes: Unused 00:12:16.917 RUH Usage Desc #007: RUH Attributes: Unused 00:12:16.917 00:12:16.917 FDP statistics log page 00:12:16.917 ======================= 00:12:16.917 Host bytes with metadata written: 441688064 00:12:16.917 Media bytes with metadata written: 441753600 00:12:16.917 Media bytes erased: 0 00:12:16.917 00:12:16.917 FDP events log page 00:12:16.917 =================== 00:12:16.917 Number of FDP events: 0 00:12:16.917 00:12:16.917 NVM Specific Namespace Data 00:12:16.917 =========================== 00:12:16.917 Logical Block Storage Tag Mask: 0 00:12:16.917 Protection Information Capabilities: 00:12:16.917 16b Guard Protection Information Storage Tag Support: No 00:12:16.917 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:12:16.917 Storage Tag Check Read Support: No 00:12:16.917 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.917 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.917 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.917 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.918 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.918 Extended LBA Format #05: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.918 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.918 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:12:16.918 ************************************ 00:12:16.918 END TEST nvme_identify 00:12:16.918 ************************************ 00:12:16.918 00:12:16.918 real 0m1.866s 00:12:16.918 user 0m0.739s 00:12:16.918 sys 0m0.916s 00:12:16.918 07:48:39 nvme.nvme_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:16.918 07:48:39 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:12:16.918 07:48:39 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:12:16.918 07:48:39 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:16.918 07:48:39 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:16.918 07:48:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:16.918 ************************************ 00:12:16.918 START TEST nvme_perf 00:12:16.918 ************************************ 00:12:16.918 07:48:39 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # nvme_perf 00:12:16.918 07:48:39 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:12:18.301 Initializing NVMe Controllers 00:12:18.301 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:18.301 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:18.301 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:18.301 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:18.301 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:12:18.301 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:12:18.301 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:12:18.301 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:12:18.301 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:12:18.301 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:12:18.301 Initialization complete. Launching workers. 
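The identify pass that just finished is driven by the for bdf in "${bdfs[@]}" loop visible above: nvme.sh runs spdk_nvme_identify once per attached PCIe function, and the final dump covers the controller at 0000:00:13.0 (the FDP-enabled subsystem nqn.2019-08.org.qemu:fdp-subsys3). A minimal sketch of reproducing that single dump by hand, assuming the same build tree as this run, root privileges, and devices already bound to a userspace driver (e.g. via SPDK's setup.sh):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:PCIe traddr:0000:00:13.0' -i 0

The -r transport string and the -i shared-memory group ID are taken verbatim from the invocation recorded in the log.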
00:12:18.301 ======================================================== 00:12:18.301 Latency(us) 00:12:18.301 Device Information : IOPS MiB/s Average min max 00:12:18.301 PCIE (0000:00:10.0) NSID 1 from core 0: 12079.00 141.55 10618.49 8065.42 48103.13 00:12:18.301 PCIE (0000:00:11.0) NSID 1 from core 0: 12079.00 141.55 10593.12 8192.62 45253.31 00:12:18.301 PCIE (0000:00:13.0) NSID 1 from core 0: 12079.00 141.55 10564.23 8112.45 42818.31 00:12:18.301 PCIE (0000:00:12.0) NSID 1 from core 0: 12079.00 141.55 10534.97 8144.13 39786.64 00:12:18.301 PCIE (0000:00:12.0) NSID 2 from core 0: 12079.00 141.55 10505.74 8203.04 36833.55 00:12:18.301 PCIE (0000:00:12.0) NSID 3 from core 0: 12079.00 141.55 10477.77 8183.53 33753.91 00:12:18.301 ======================================================== 00:12:18.301 Total : 72474.03 849.31 10549.05 8065.42 48103.13 00:12:18.301 00:12:18.301 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:12:18.301 ================================================================================= 00:12:18.301 1.00000% : 8400.524us 00:12:18.301 10.00000% : 9055.884us 00:12:18.301 25.00000% : 9651.665us 00:12:18.301 50.00000% : 10247.447us 00:12:18.301 75.00000% : 10843.229us 00:12:18.301 90.00000% : 11736.902us 00:12:18.301 95.00000% : 12451.840us 00:12:18.301 98.00000% : 13226.356us 00:12:18.301 99.00000% : 37653.411us 00:12:18.301 99.50000% : 45517.731us 00:12:18.301 99.90000% : 47662.545us 00:12:18.301 99.99000% : 48139.171us 00:12:18.301 99.99900% : 48139.171us 00:12:18.301 99.99990% : 48139.171us 00:12:18.301 99.99999% : 48139.171us 00:12:18.301 00:12:18.301 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:12:18.301 ================================================================================= 00:12:18.301 1.00000% : 8460.102us 00:12:18.301 10.00000% : 9115.462us 00:12:18.301 25.00000% : 9651.665us 00:12:18.301 50.00000% : 10247.447us 00:12:18.301 75.00000% : 10783.651us 00:12:18.301 90.00000% : 11796.480us 00:12:18.301 95.00000% : 12451.840us 00:12:18.301 98.00000% : 13166.778us 00:12:18.301 99.00000% : 35270.284us 00:12:18.301 99.50000% : 42896.291us 00:12:18.301 99.90000% : 44802.793us 00:12:18.301 99.99000% : 45279.418us 00:12:18.301 99.99900% : 45279.418us 00:12:18.301 99.99990% : 45279.418us 00:12:18.301 99.99999% : 45279.418us 00:12:18.301 00:12:18.301 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:12:18.301 ================================================================================= 00:12:18.301 1.00000% : 8460.102us 00:12:18.301 10.00000% : 9115.462us 00:12:18.301 25.00000% : 9651.665us 00:12:18.301 50.00000% : 10247.447us 00:12:18.301 75.00000% : 10783.651us 00:12:18.301 90.00000% : 11736.902us 00:12:18.301 95.00000% : 12451.840us 00:12:18.301 98.00000% : 13166.778us 00:12:18.301 99.00000% : 32648.844us 00:12:18.301 99.50000% : 40274.851us 00:12:18.301 99.90000% : 42419.665us 00:12:18.302 99.99000% : 42896.291us 00:12:18.302 99.99900% : 42896.291us 00:12:18.302 99.99990% : 42896.291us 00:12:18.302 99.99999% : 42896.291us 00:12:18.302 00:12:18.302 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:12:18.302 ================================================================================= 00:12:18.302 1.00000% : 8460.102us 00:12:18.302 10.00000% : 9115.462us 00:12:18.302 25.00000% : 9651.665us 00:12:18.302 50.00000% : 10247.447us 00:12:18.302 75.00000% : 10783.651us 00:12:18.302 90.00000% : 11677.324us 00:12:18.302 95.00000% : 12451.840us 00:12:18.302 98.00000% : 13405.091us 
00:12:18.302 99.00000% : 29669.935us 00:12:18.302 99.50000% : 37415.098us 00:12:18.302 99.90000% : 39321.600us 00:12:18.302 99.99000% : 39798.225us 00:12:18.302 99.99900% : 39798.225us 00:12:18.302 99.99990% : 39798.225us 00:12:18.302 99.99999% : 39798.225us 00:12:18.302 00:12:18.302 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:12:18.302 ================================================================================= 00:12:18.302 1.00000% : 8519.680us 00:12:18.302 10.00000% : 9115.462us 00:12:18.302 25.00000% : 9651.665us 00:12:18.302 50.00000% : 10247.447us 00:12:18.302 75.00000% : 10783.651us 00:12:18.302 90.00000% : 11617.745us 00:12:18.302 95.00000% : 12451.840us 00:12:18.302 98.00000% : 13524.247us 00:12:18.302 99.00000% : 26571.869us 00:12:18.302 99.50000% : 34317.033us 00:12:18.302 99.90000% : 36461.847us 00:12:18.302 99.99000% : 36938.473us 00:12:18.302 99.99900% : 36938.473us 00:12:18.302 99.99990% : 36938.473us 00:12:18.302 99.99999% : 36938.473us 00:12:18.302 00:12:18.302 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:12:18.302 ================================================================================= 00:12:18.302 1.00000% : 8519.680us 00:12:18.302 10.00000% : 9115.462us 00:12:18.302 25.00000% : 9651.665us 00:12:18.302 50.00000% : 10247.447us 00:12:18.302 75.00000% : 10843.229us 00:12:18.302 90.00000% : 11677.324us 00:12:18.302 95.00000% : 12451.840us 00:12:18.302 98.00000% : 13524.247us 00:12:18.302 99.00000% : 23831.273us 00:12:18.302 99.50000% : 31457.280us 00:12:18.302 99.90000% : 33363.782us 00:12:18.302 99.99000% : 33840.407us 00:12:18.302 99.99900% : 33840.407us 00:12:18.302 99.99990% : 33840.407us 00:12:18.302 99.99999% : 33840.407us 00:12:18.302 00:12:18.302 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:12:18.302 ============================================================================== 00:12:18.302 Range in us Cumulative IO count 00:12:18.302 8043.055 - 8102.633: 0.0579% ( 7) 00:12:18.302 8102.633 - 8162.211: 0.1240% ( 8) 00:12:18.302 8162.211 - 8221.789: 0.2811% ( 19) 00:12:18.302 8221.789 - 8281.367: 0.5787% ( 36) 00:12:18.302 8281.367 - 8340.945: 0.9177% ( 41) 00:12:18.302 8340.945 - 8400.524: 1.3062% ( 47) 00:12:18.302 8400.524 - 8460.102: 1.8519% ( 66) 00:12:18.302 8460.102 - 8519.680: 2.4719% ( 75) 00:12:18.302 8519.680 - 8579.258: 3.0837% ( 74) 00:12:18.302 8579.258 - 8638.836: 3.9104% ( 100) 00:12:18.302 8638.836 - 8698.415: 4.6379% ( 88) 00:12:18.302 8698.415 - 8757.993: 5.5142% ( 106) 00:12:18.302 8757.993 - 8817.571: 6.3079% ( 96) 00:12:18.302 8817.571 - 8877.149: 7.2751% ( 117) 00:12:18.302 8877.149 - 8936.727: 8.2341% ( 116) 00:12:18.302 8936.727 - 8996.305: 9.2675% ( 125) 00:12:18.302 8996.305 - 9055.884: 10.3505% ( 131) 00:12:18.302 9055.884 - 9115.462: 11.4749% ( 136) 00:12:18.302 9115.462 - 9175.040: 12.5248% ( 127) 00:12:18.302 9175.040 - 9234.618: 13.7318% ( 146) 00:12:18.302 9234.618 - 9294.196: 15.1290% ( 169) 00:12:18.302 9294.196 - 9353.775: 16.5509% ( 172) 00:12:18.302 9353.775 - 9413.353: 18.3366% ( 216) 00:12:18.302 9413.353 - 9472.931: 20.2712% ( 234) 00:12:18.302 9472.931 - 9532.509: 22.1974% ( 233) 00:12:18.302 9532.509 - 9592.087: 24.2642% ( 250) 00:12:18.302 9592.087 - 9651.665: 26.4881% ( 269) 00:12:18.302 9651.665 - 9711.244: 28.9021% ( 292) 00:12:18.302 9711.244 - 9770.822: 31.2087% ( 279) 00:12:18.302 9770.822 - 9830.400: 33.7798% ( 311) 00:12:18.302 9830.400 - 9889.978: 36.3178% ( 307) 00:12:18.302 9889.978 - 9949.556: 38.9716% ( 321) 00:12:18.302 9949.556 - 
10009.135: 41.4435% ( 299) 00:12:18.302 10009.135 - 10068.713: 44.0642% ( 317) 00:12:18.302 10068.713 - 10128.291: 46.7179% ( 321) 00:12:18.302 10128.291 - 10187.869: 49.3386% ( 317) 00:12:18.302 10187.869 - 10247.447: 52.0089% ( 323) 00:12:18.302 10247.447 - 10307.025: 54.5966% ( 313) 00:12:18.302 10307.025 - 10366.604: 57.1677% ( 311) 00:12:18.302 10366.604 - 10426.182: 59.5486% ( 288) 00:12:18.302 10426.182 - 10485.760: 62.0453% ( 302) 00:12:18.302 10485.760 - 10545.338: 64.5503% ( 303) 00:12:18.302 10545.338 - 10604.916: 66.7080% ( 261) 00:12:18.302 10604.916 - 10664.495: 69.0311% ( 281) 00:12:18.302 10664.495 - 10724.073: 71.1475% ( 256) 00:12:18.302 10724.073 - 10783.651: 73.3052% ( 261) 00:12:18.302 10783.651 - 10843.229: 75.2067% ( 230) 00:12:18.302 10843.229 - 10902.807: 77.0007% ( 217) 00:12:18.302 10902.807 - 10962.385: 78.5466% ( 187) 00:12:18.302 10962.385 - 11021.964: 80.1257% ( 191) 00:12:18.302 11021.964 - 11081.542: 81.4815% ( 164) 00:12:18.302 11081.542 - 11141.120: 82.8456% ( 165) 00:12:18.302 11141.120 - 11200.698: 84.0030% ( 140) 00:12:18.302 11200.698 - 11260.276: 85.0116% ( 122) 00:12:18.302 11260.276 - 11319.855: 86.0284% ( 123) 00:12:18.302 11319.855 - 11379.433: 86.7312% ( 85) 00:12:18.302 11379.433 - 11439.011: 87.4256% ( 84) 00:12:18.302 11439.011 - 11498.589: 88.1200% ( 84) 00:12:18.302 11498.589 - 11558.167: 88.7649% ( 78) 00:12:18.302 11558.167 - 11617.745: 89.3436% ( 70) 00:12:18.302 11617.745 - 11677.324: 89.8562% ( 62) 00:12:18.302 11677.324 - 11736.902: 90.3604% ( 61) 00:12:18.302 11736.902 - 11796.480: 90.9226% ( 68) 00:12:18.302 11796.480 - 11856.058: 91.4187% ( 60) 00:12:18.302 11856.058 - 11915.636: 91.7824% ( 44) 00:12:18.302 11915.636 - 11975.215: 92.2454% ( 56) 00:12:18.302 11975.215 - 12034.793: 92.6505% ( 49) 00:12:18.302 12034.793 - 12094.371: 92.9729% ( 39) 00:12:18.302 12094.371 - 12153.949: 93.3366% ( 44) 00:12:18.302 12153.949 - 12213.527: 93.7087% ( 45) 00:12:18.302 12213.527 - 12273.105: 94.1468% ( 53) 00:12:18.302 12273.105 - 12332.684: 94.4775% ( 40) 00:12:18.302 12332.684 - 12392.262: 94.8165% ( 41) 00:12:18.302 12392.262 - 12451.840: 95.1389% ( 39) 00:12:18.302 12451.840 - 12511.418: 95.4034% ( 32) 00:12:18.302 12511.418 - 12570.996: 95.7093% ( 37) 00:12:18.302 12570.996 - 12630.575: 95.9408% ( 28) 00:12:18.302 12630.575 - 12690.153: 96.2136% ( 33) 00:12:18.302 12690.153 - 12749.731: 96.4368% ( 27) 00:12:18.302 12749.731 - 12809.309: 96.7014% ( 32) 00:12:18.302 12809.309 - 12868.887: 96.9742% ( 33) 00:12:18.302 12868.887 - 12928.465: 97.1974% ( 27) 00:12:18.302 12928.465 - 12988.044: 97.4372% ( 29) 00:12:18.302 12988.044 - 13047.622: 97.6521% ( 26) 00:12:18.302 13047.622 - 13107.200: 97.7679% ( 14) 00:12:18.302 13107.200 - 13166.778: 97.9084% ( 17) 00:12:18.302 13166.778 - 13226.356: 98.0076% ( 12) 00:12:18.302 13226.356 - 13285.935: 98.1481% ( 17) 00:12:18.302 13285.935 - 13345.513: 98.2308% ( 10) 00:12:18.302 13345.513 - 13405.091: 98.2970% ( 8) 00:12:18.302 13405.091 - 13464.669: 98.3548% ( 7) 00:12:18.302 13464.669 - 13524.247: 98.4127% ( 7) 00:12:18.302 13524.247 - 13583.825: 98.4871% ( 9) 00:12:18.302 13583.825 - 13643.404: 98.5532% ( 8) 00:12:18.302 13643.404 - 13702.982: 98.6111% ( 7) 00:12:18.302 13702.982 - 13762.560: 98.6442% ( 4) 00:12:18.302 13762.560 - 13822.138: 98.6772% ( 4) 00:12:18.302 13822.138 - 13881.716: 98.7186% ( 5) 00:12:18.302 13881.716 - 13941.295: 98.7351% ( 2) 00:12:18.302 13941.295 - 14000.873: 98.7847% ( 6) 00:12:18.302 14000.873 - 14060.451: 98.8095% ( 3) 00:12:18.302 14060.451 - 14120.029: 98.8343% ( 3) 
00:12:18.302 14120.029 - 14179.607: 98.8674% ( 4) 00:12:18.302 14179.607 - 14239.185: 98.8839% ( 2) 00:12:18.302 14239.185 - 14298.764: 98.9087% ( 3) 00:12:18.302 14298.764 - 14358.342: 98.9253% ( 2) 00:12:18.302 14358.342 - 14417.920: 98.9418% ( 2) 00:12:18.302 37176.785 - 37415.098: 98.9666% ( 3) 00:12:18.302 37415.098 - 37653.411: 99.0162% ( 6) 00:12:18.302 37653.411 - 37891.724: 99.0658% ( 6) 00:12:18.302 37891.724 - 38130.036: 99.1071% ( 5) 00:12:18.302 38130.036 - 38368.349: 99.1567% ( 6) 00:12:18.302 38368.349 - 38606.662: 99.2146% ( 7) 00:12:18.302 38606.662 - 38844.975: 99.2642% ( 6) 00:12:18.302 38844.975 - 39083.287: 99.3056% ( 5) 00:12:18.302 39083.287 - 39321.600: 99.3469% ( 5) 00:12:18.302 39321.600 - 39559.913: 99.3965% ( 6) 00:12:18.302 39559.913 - 39798.225: 99.4461% ( 6) 00:12:18.302 39798.225 - 40036.538: 99.4709% ( 3) 00:12:18.302 45279.418 - 45517.731: 99.5205% ( 6) 00:12:18.302 45517.731 - 45756.044: 99.5618% ( 5) 00:12:18.302 45756.044 - 45994.356: 99.6032% ( 5) 00:12:18.302 45994.356 - 46232.669: 99.6362% ( 4) 00:12:18.302 46232.669 - 46470.982: 99.6941% ( 7) 00:12:18.302 46470.982 - 46709.295: 99.7354% ( 5) 00:12:18.302 46709.295 - 46947.607: 99.7851% ( 6) 00:12:18.302 46947.607 - 47185.920: 99.8347% ( 6) 00:12:18.302 47185.920 - 47424.233: 99.8760% ( 5) 00:12:18.302 47424.233 - 47662.545: 99.9173% ( 5) 00:12:18.302 47662.545 - 47900.858: 99.9669% ( 6) 00:12:18.302 47900.858 - 48139.171: 100.0000% ( 4) 00:12:18.302 00:12:18.302 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:12:18.302 ============================================================================== 00:12:18.302 Range in us Cumulative IO count 00:12:18.302 8162.211 - 8221.789: 0.0331% ( 4) 00:12:18.302 8221.789 - 8281.367: 0.1405% ( 13) 00:12:18.302 8281.367 - 8340.945: 0.3390% ( 24) 00:12:18.302 8340.945 - 8400.524: 0.6366% ( 36) 00:12:18.302 8400.524 - 8460.102: 1.0665% ( 52) 00:12:18.302 8460.102 - 8519.680: 1.6121% ( 66) 00:12:18.303 8519.680 - 8579.258: 2.1495% ( 65) 00:12:18.303 8579.258 - 8638.836: 2.8853% ( 89) 00:12:18.303 8638.836 - 8698.415: 3.6872% ( 97) 00:12:18.303 8698.415 - 8757.993: 4.5304% ( 102) 00:12:18.303 8757.993 - 8817.571: 5.4563% ( 112) 00:12:18.303 8817.571 - 8877.149: 6.3988% ( 114) 00:12:18.303 8877.149 - 8936.727: 7.4239% ( 124) 00:12:18.303 8936.727 - 8996.305: 8.5648% ( 138) 00:12:18.303 8996.305 - 9055.884: 9.7388% ( 142) 00:12:18.303 9055.884 - 9115.462: 10.9623% ( 148) 00:12:18.303 9115.462 - 9175.040: 12.2354% ( 154) 00:12:18.303 9175.040 - 9234.618: 13.5913% ( 164) 00:12:18.303 9234.618 - 9294.196: 15.0298% ( 174) 00:12:18.303 9294.196 - 9353.775: 16.4187% ( 168) 00:12:18.303 9353.775 - 9413.353: 18.0060% ( 192) 00:12:18.303 9413.353 - 9472.931: 19.6925% ( 204) 00:12:18.303 9472.931 - 9532.509: 21.3955% ( 206) 00:12:18.303 9532.509 - 9592.087: 23.3714% ( 239) 00:12:18.303 9592.087 - 9651.665: 25.5622% ( 265) 00:12:18.303 9651.665 - 9711.244: 27.8108% ( 272) 00:12:18.303 9711.244 - 9770.822: 30.2497% ( 295) 00:12:18.303 9770.822 - 9830.400: 32.6637% ( 292) 00:12:18.303 9830.400 - 9889.978: 35.1769% ( 304) 00:12:18.303 9889.978 - 9949.556: 37.7397% ( 310) 00:12:18.303 9949.556 - 10009.135: 40.4597% ( 329) 00:12:18.303 10009.135 - 10068.713: 43.2622% ( 339) 00:12:18.303 10068.713 - 10128.291: 46.1806% ( 353) 00:12:18.303 10128.291 - 10187.869: 48.9583% ( 336) 00:12:18.303 10187.869 - 10247.447: 51.8436% ( 349) 00:12:18.303 10247.447 - 10307.025: 54.8446% ( 363) 00:12:18.303 10307.025 - 10366.604: 57.7546% ( 352) 00:12:18.303 10366.604 - 10426.182: 
60.5407% ( 337) 00:12:18.303 10426.182 - 10485.760: 63.3185% ( 336) 00:12:18.303 10485.760 - 10545.338: 65.9557% ( 319) 00:12:18.303 10545.338 - 10604.916: 68.4110% ( 297) 00:12:18.303 10604.916 - 10664.495: 70.8168% ( 291) 00:12:18.303 10664.495 - 10724.073: 73.0655% ( 272) 00:12:18.303 10724.073 - 10783.651: 75.1488% ( 252) 00:12:18.303 10783.651 - 10843.229: 76.9593% ( 219) 00:12:18.303 10843.229 - 10902.807: 78.6376% ( 203) 00:12:18.303 10902.807 - 10962.385: 80.1091% ( 178) 00:12:18.303 10962.385 - 11021.964: 81.4897% ( 167) 00:12:18.303 11021.964 - 11081.542: 82.6554% ( 141) 00:12:18.303 11081.542 - 11141.120: 83.7467% ( 132) 00:12:18.303 11141.120 - 11200.698: 84.7057% ( 116) 00:12:18.303 11200.698 - 11260.276: 85.5985% ( 108) 00:12:18.303 11260.276 - 11319.855: 86.3178% ( 87) 00:12:18.303 11319.855 - 11379.433: 87.0288% ( 86) 00:12:18.303 11379.433 - 11439.011: 87.6323% ( 73) 00:12:18.303 11439.011 - 11498.589: 88.1614% ( 64) 00:12:18.303 11498.589 - 11558.167: 88.6739% ( 62) 00:12:18.303 11558.167 - 11617.745: 89.0956% ( 51) 00:12:18.303 11617.745 - 11677.324: 89.5089% ( 50) 00:12:18.303 11677.324 - 11736.902: 89.8810% ( 45) 00:12:18.303 11736.902 - 11796.480: 90.3108% ( 52) 00:12:18.303 11796.480 - 11856.058: 90.8069% ( 60) 00:12:18.303 11856.058 - 11915.636: 91.2781% ( 57) 00:12:18.303 11915.636 - 11975.215: 91.7411% ( 56) 00:12:18.303 11975.215 - 12034.793: 92.2454% ( 61) 00:12:18.303 12034.793 - 12094.371: 92.7249% ( 58) 00:12:18.303 12094.371 - 12153.949: 93.2209% ( 60) 00:12:18.303 12153.949 - 12213.527: 93.6425% ( 51) 00:12:18.303 12213.527 - 12273.105: 94.0890% ( 54) 00:12:18.303 12273.105 - 12332.684: 94.5354% ( 54) 00:12:18.303 12332.684 - 12392.262: 94.9239% ( 47) 00:12:18.303 12392.262 - 12451.840: 95.2464% ( 39) 00:12:18.303 12451.840 - 12511.418: 95.5688% ( 39) 00:12:18.303 12511.418 - 12570.996: 95.8664% ( 36) 00:12:18.303 12570.996 - 12630.575: 96.1310% ( 32) 00:12:18.303 12630.575 - 12690.153: 96.4038% ( 33) 00:12:18.303 12690.153 - 12749.731: 96.6601% ( 31) 00:12:18.303 12749.731 - 12809.309: 96.9163% ( 31) 00:12:18.303 12809.309 - 12868.887: 97.1892% ( 33) 00:12:18.303 12868.887 - 12928.465: 97.4372% ( 30) 00:12:18.303 12928.465 - 12988.044: 97.6521% ( 26) 00:12:18.303 12988.044 - 13047.622: 97.8175% ( 20) 00:12:18.303 13047.622 - 13107.200: 97.9828% ( 20) 00:12:18.303 13107.200 - 13166.778: 98.1233% ( 17) 00:12:18.303 13166.778 - 13226.356: 98.1978% ( 9) 00:12:18.303 13226.356 - 13285.935: 98.2970% ( 12) 00:12:18.303 13285.935 - 13345.513: 98.3714% ( 9) 00:12:18.303 13345.513 - 13405.091: 98.4292% ( 7) 00:12:18.303 13405.091 - 13464.669: 98.4623% ( 4) 00:12:18.303 13464.669 - 13524.247: 98.5119% ( 6) 00:12:18.303 13524.247 - 13583.825: 98.5532% ( 5) 00:12:18.303 13583.825 - 13643.404: 98.6028% ( 6) 00:12:18.303 13643.404 - 13702.982: 98.6524% ( 6) 00:12:18.303 13702.982 - 13762.560: 98.6772% ( 3) 00:12:18.303 13762.560 - 13822.138: 98.7021% ( 3) 00:12:18.303 13822.138 - 13881.716: 98.7269% ( 3) 00:12:18.303 13881.716 - 13941.295: 98.7434% ( 2) 00:12:18.303 13941.295 - 14000.873: 98.7599% ( 2) 00:12:18.303 14000.873 - 14060.451: 98.7847% ( 3) 00:12:18.303 14060.451 - 14120.029: 98.8095% ( 3) 00:12:18.303 14120.029 - 14179.607: 98.8343% ( 3) 00:12:18.303 14179.607 - 14239.185: 98.8591% ( 3) 00:12:18.303 14239.185 - 14298.764: 98.8839% ( 3) 00:12:18.303 14298.764 - 14358.342: 98.9005% ( 2) 00:12:18.303 14358.342 - 14417.920: 98.9253% ( 3) 00:12:18.303 14417.920 - 14477.498: 98.9418% ( 2) 00:12:18.303 34793.658 - 35031.971: 98.9666% ( 3) 00:12:18.303 35031.971 - 
35270.284: 99.0079% ( 5) 00:12:18.303 35270.284 - 35508.596: 99.0575% ( 6) 00:12:18.303 35508.596 - 35746.909: 99.0989% ( 5) 00:12:18.303 35746.909 - 35985.222: 99.1485% ( 6) 00:12:18.303 35985.222 - 36223.535: 99.1981% ( 6) 00:12:18.303 36223.535 - 36461.847: 99.2394% ( 5) 00:12:18.303 36461.847 - 36700.160: 99.2890% ( 6) 00:12:18.303 36700.160 - 36938.473: 99.3304% ( 5) 00:12:18.303 36938.473 - 37176.785: 99.3800% ( 6) 00:12:18.303 37176.785 - 37415.098: 99.4296% ( 6) 00:12:18.303 37415.098 - 37653.411: 99.4709% ( 5) 00:12:18.303 42419.665 - 42657.978: 99.4792% ( 1) 00:12:18.303 42657.978 - 42896.291: 99.5288% ( 6) 00:12:18.303 42896.291 - 43134.604: 99.5784% ( 6) 00:12:18.303 43134.604 - 43372.916: 99.6197% ( 5) 00:12:18.303 43372.916 - 43611.229: 99.6693% ( 6) 00:12:18.303 43611.229 - 43849.542: 99.7189% ( 6) 00:12:18.303 43849.542 - 44087.855: 99.7603% ( 5) 00:12:18.303 44087.855 - 44326.167: 99.8099% ( 6) 00:12:18.303 44326.167 - 44564.480: 99.8512% ( 5) 00:12:18.303 44564.480 - 44802.793: 99.9008% ( 6) 00:12:18.303 44802.793 - 45041.105: 99.9504% ( 6) 00:12:18.303 45041.105 - 45279.418: 100.0000% ( 6) 00:12:18.303 00:12:18.303 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:12:18.303 ============================================================================== 00:12:18.303 Range in us Cumulative IO count 00:12:18.303 8102.633 - 8162.211: 0.0248% ( 3) 00:12:18.303 8162.211 - 8221.789: 0.0827% ( 7) 00:12:18.303 8221.789 - 8281.367: 0.2397% ( 19) 00:12:18.303 8281.367 - 8340.945: 0.4382% ( 24) 00:12:18.303 8340.945 - 8400.524: 0.7027% ( 32) 00:12:18.303 8400.524 - 8460.102: 1.0665% ( 44) 00:12:18.303 8460.102 - 8519.680: 1.5212% ( 55) 00:12:18.303 8519.680 - 8579.258: 2.0420% ( 63) 00:12:18.303 8579.258 - 8638.836: 2.7034% ( 80) 00:12:18.303 8638.836 - 8698.415: 3.4557% ( 91) 00:12:18.303 8698.415 - 8757.993: 4.2824% ( 100) 00:12:18.303 8757.993 - 8817.571: 5.1670% ( 107) 00:12:18.303 8817.571 - 8877.149: 6.0433% ( 106) 00:12:18.303 8877.149 - 8936.727: 7.0933% ( 127) 00:12:18.303 8936.727 - 8996.305: 8.0522% ( 116) 00:12:18.303 8996.305 - 9055.884: 9.2014% ( 139) 00:12:18.303 9055.884 - 9115.462: 10.4249% ( 148) 00:12:18.303 9115.462 - 9175.040: 11.6402% ( 147) 00:12:18.303 9175.040 - 9234.618: 12.9216% ( 155) 00:12:18.303 9234.618 - 9294.196: 14.3022% ( 167) 00:12:18.303 9294.196 - 9353.775: 15.7407% ( 174) 00:12:18.303 9353.775 - 9413.353: 17.3694% ( 197) 00:12:18.303 9413.353 - 9472.931: 19.1055% ( 210) 00:12:18.303 9472.931 - 9532.509: 21.0896% ( 240) 00:12:18.303 9532.509 - 9592.087: 23.2804% ( 265) 00:12:18.303 9592.087 - 9651.665: 25.6696% ( 289) 00:12:18.303 9651.665 - 9711.244: 28.0258% ( 285) 00:12:18.303 9711.244 - 9770.822: 30.4315% ( 291) 00:12:18.303 9770.822 - 9830.400: 33.0192% ( 313) 00:12:18.303 9830.400 - 9889.978: 35.7308% ( 328) 00:12:18.303 9889.978 - 9949.556: 38.5251% ( 338) 00:12:18.303 9949.556 - 10009.135: 41.2781% ( 333) 00:12:18.303 10009.135 - 10068.713: 44.1055% ( 342) 00:12:18.303 10068.713 - 10128.291: 46.9907% ( 349) 00:12:18.303 10128.291 - 10187.869: 49.8595% ( 347) 00:12:18.303 10187.869 - 10247.447: 52.7282% ( 347) 00:12:18.303 10247.447 - 10307.025: 55.5969% ( 347) 00:12:18.303 10307.025 - 10366.604: 58.4325% ( 343) 00:12:18.303 10366.604 - 10426.182: 61.2765% ( 344) 00:12:18.303 10426.182 - 10485.760: 63.9220% ( 320) 00:12:18.303 10485.760 - 10545.338: 66.5427% ( 317) 00:12:18.303 10545.338 - 10604.916: 69.0890% ( 308) 00:12:18.303 10604.916 - 10664.495: 71.4286% ( 283) 00:12:18.303 10664.495 - 10724.073: 73.5698% ( 259) 
00:12:18.303 10724.073 - 10783.651: 75.6118% ( 247) 00:12:18.303 10783.651 - 10843.229: 77.3313% ( 208) 00:12:18.303 10843.229 - 10902.807: 78.7533% ( 172) 00:12:18.303 10902.807 - 10962.385: 80.1257% ( 166) 00:12:18.303 10962.385 - 11021.964: 81.3657% ( 150) 00:12:18.303 11021.964 - 11081.542: 82.4570% ( 132) 00:12:18.303 11081.542 - 11141.120: 83.4573% ( 121) 00:12:18.303 11141.120 - 11200.698: 84.3915% ( 113) 00:12:18.303 11200.698 - 11260.276: 85.2761% ( 107) 00:12:18.303 11260.276 - 11319.855: 86.1028% ( 100) 00:12:18.303 11319.855 - 11379.433: 86.8882% ( 95) 00:12:18.303 11379.433 - 11439.011: 87.5496% ( 80) 00:12:18.303 11439.011 - 11498.589: 88.1862% ( 77) 00:12:18.303 11498.589 - 11558.167: 88.7235% ( 65) 00:12:18.303 11558.167 - 11617.745: 89.2278% ( 61) 00:12:18.303 11617.745 - 11677.324: 89.6743% ( 54) 00:12:18.303 11677.324 - 11736.902: 90.1538% ( 58) 00:12:18.303 11736.902 - 11796.480: 90.5506% ( 48) 00:12:18.303 11796.480 - 11856.058: 90.9144% ( 44) 00:12:18.304 11856.058 - 11915.636: 91.3277% ( 50) 00:12:18.304 11915.636 - 11975.215: 91.7411% ( 50) 00:12:18.304 11975.215 - 12034.793: 92.2040% ( 56) 00:12:18.304 12034.793 - 12094.371: 92.6257% ( 51) 00:12:18.304 12094.371 - 12153.949: 93.0721% ( 54) 00:12:18.304 12153.949 - 12213.527: 93.5185% ( 54) 00:12:18.304 12213.527 - 12273.105: 93.9732% ( 55) 00:12:18.304 12273.105 - 12332.684: 94.4444% ( 57) 00:12:18.304 12332.684 - 12392.262: 94.7999% ( 43) 00:12:18.304 12392.262 - 12451.840: 95.1224% ( 39) 00:12:18.304 12451.840 - 12511.418: 95.4282% ( 37) 00:12:18.304 12511.418 - 12570.996: 95.7176% ( 35) 00:12:18.304 12570.996 - 12630.575: 95.9987% ( 34) 00:12:18.304 12630.575 - 12690.153: 96.2632% ( 32) 00:12:18.304 12690.153 - 12749.731: 96.5360% ( 33) 00:12:18.304 12749.731 - 12809.309: 96.7923% ( 31) 00:12:18.304 12809.309 - 12868.887: 96.9907% ( 24) 00:12:18.304 12868.887 - 12928.465: 97.1809% ( 23) 00:12:18.304 12928.465 - 12988.044: 97.4124% ( 28) 00:12:18.304 12988.044 - 13047.622: 97.6108% ( 24) 00:12:18.304 13047.622 - 13107.200: 97.8175% ( 25) 00:12:18.304 13107.200 - 13166.778: 98.0241% ( 25) 00:12:18.304 13166.778 - 13226.356: 98.1895% ( 20) 00:12:18.304 13226.356 - 13285.935: 98.2887% ( 12) 00:12:18.304 13285.935 - 13345.513: 98.3879% ( 12) 00:12:18.304 13345.513 - 13405.091: 98.4788% ( 11) 00:12:18.304 13405.091 - 13464.669: 98.5450% ( 8) 00:12:18.304 13464.669 - 13524.247: 98.6028% ( 7) 00:12:18.304 13524.247 - 13583.825: 98.6359% ( 4) 00:12:18.304 13583.825 - 13643.404: 98.6855% ( 6) 00:12:18.304 13643.404 - 13702.982: 98.7269% ( 5) 00:12:18.304 13702.982 - 13762.560: 98.7517% ( 3) 00:12:18.304 13762.560 - 13822.138: 98.7765% ( 3) 00:12:18.304 13822.138 - 13881.716: 98.7930% ( 2) 00:12:18.304 13881.716 - 13941.295: 98.8178% ( 3) 00:12:18.304 13941.295 - 14000.873: 98.8426% ( 3) 00:12:18.304 14000.873 - 14060.451: 98.8674% ( 3) 00:12:18.304 14060.451 - 14120.029: 98.8922% ( 3) 00:12:18.304 14120.029 - 14179.607: 98.9170% ( 3) 00:12:18.304 14179.607 - 14239.185: 98.9335% ( 2) 00:12:18.304 14239.185 - 14298.764: 98.9418% ( 1) 00:12:18.304 32172.218 - 32410.531: 98.9831% ( 5) 00:12:18.304 32410.531 - 32648.844: 99.0245% ( 5) 00:12:18.304 32648.844 - 32887.156: 99.0741% ( 6) 00:12:18.304 32887.156 - 33125.469: 99.1071% ( 4) 00:12:18.304 33125.469 - 33363.782: 99.1567% ( 6) 00:12:18.304 33363.782 - 33602.095: 99.2063% ( 6) 00:12:18.304 33602.095 - 33840.407: 99.2560% ( 6) 00:12:18.304 33840.407 - 34078.720: 99.2973% ( 5) 00:12:18.304 34078.720 - 34317.033: 99.3469% ( 6) 00:12:18.304 34317.033 - 34555.345: 99.3882% ( 5) 
00:12:18.304 34555.345 - 34793.658: 99.4296% ( 5) 00:12:18.304 34793.658 - 35031.971: 99.4709% ( 5) 00:12:18.304 40036.538 - 40274.851: 99.5122% ( 5) 00:12:18.304 40274.851 - 40513.164: 99.5453% ( 4) 00:12:18.304 40513.164 - 40751.476: 99.5949% ( 6) 00:12:18.304 40751.476 - 40989.789: 99.6362% ( 5) 00:12:18.304 40989.789 - 41228.102: 99.6858% ( 6) 00:12:18.304 41228.102 - 41466.415: 99.7354% ( 6) 00:12:18.304 41466.415 - 41704.727: 99.7851% ( 6) 00:12:18.304 41704.727 - 41943.040: 99.8264% ( 5) 00:12:18.304 41943.040 - 42181.353: 99.8760% ( 6) 00:12:18.304 42181.353 - 42419.665: 99.9256% ( 6) 00:12:18.304 42419.665 - 42657.978: 99.9669% ( 5) 00:12:18.304 42657.978 - 42896.291: 100.0000% ( 4) 00:12:18.304 00:12:18.304 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:12:18.304 ============================================================================== 00:12:18.304 Range in us Cumulative IO count 00:12:18.304 8102.633 - 8162.211: 0.0165% ( 2) 00:12:18.304 8162.211 - 8221.789: 0.0744% ( 7) 00:12:18.304 8221.789 - 8281.367: 0.2480% ( 21) 00:12:18.304 8281.367 - 8340.945: 0.4299% ( 22) 00:12:18.304 8340.945 - 8400.524: 0.7110% ( 34) 00:12:18.304 8400.524 - 8460.102: 1.0251% ( 38) 00:12:18.304 8460.102 - 8519.680: 1.4220% ( 48) 00:12:18.304 8519.680 - 8579.258: 2.0089% ( 71) 00:12:18.304 8579.258 - 8638.836: 2.6951% ( 83) 00:12:18.304 8638.836 - 8698.415: 3.3978% ( 85) 00:12:18.304 8698.415 - 8757.993: 4.2245% ( 100) 00:12:18.304 8757.993 - 8817.571: 5.0347% ( 98) 00:12:18.304 8817.571 - 8877.149: 5.9276% ( 108) 00:12:18.304 8877.149 - 8936.727: 6.9114% ( 119) 00:12:18.304 8936.727 - 8996.305: 7.9117% ( 121) 00:12:18.304 8996.305 - 9055.884: 8.9947% ( 131) 00:12:18.304 9055.884 - 9115.462: 10.1769% ( 143) 00:12:18.304 9115.462 - 9175.040: 11.3922% ( 147) 00:12:18.304 9175.040 - 9234.618: 12.6984% ( 158) 00:12:18.304 9234.618 - 9294.196: 14.1369% ( 174) 00:12:18.304 9294.196 - 9353.775: 15.5837% ( 175) 00:12:18.304 9353.775 - 9413.353: 17.1710% ( 192) 00:12:18.304 9413.353 - 9472.931: 18.8823% ( 207) 00:12:18.304 9472.931 - 9532.509: 20.9243% ( 247) 00:12:18.304 9532.509 - 9592.087: 23.1151% ( 265) 00:12:18.304 9592.087 - 9651.665: 25.4051% ( 277) 00:12:18.304 9651.665 - 9711.244: 27.8108% ( 291) 00:12:18.304 9711.244 - 9770.822: 30.2001% ( 289) 00:12:18.304 9770.822 - 9830.400: 32.5479% ( 284) 00:12:18.304 9830.400 - 9889.978: 35.1604% ( 316) 00:12:18.304 9889.978 - 9949.556: 37.9464% ( 337) 00:12:18.304 9949.556 - 10009.135: 40.7242% ( 336) 00:12:18.304 10009.135 - 10068.713: 43.7748% ( 369) 00:12:18.304 10068.713 - 10128.291: 46.7593% ( 361) 00:12:18.304 10128.291 - 10187.869: 49.7024% ( 356) 00:12:18.304 10187.869 - 10247.447: 52.7199% ( 365) 00:12:18.304 10247.447 - 10307.025: 55.5308% ( 340) 00:12:18.304 10307.025 - 10366.604: 58.4491% ( 353) 00:12:18.304 10366.604 - 10426.182: 61.3013% ( 345) 00:12:18.304 10426.182 - 10485.760: 63.9964% ( 326) 00:12:18.304 10485.760 - 10545.338: 66.5675% ( 311) 00:12:18.304 10545.338 - 10604.916: 69.0311% ( 298) 00:12:18.304 10604.916 - 10664.495: 71.3376% ( 279) 00:12:18.304 10664.495 - 10724.073: 73.5202% ( 264) 00:12:18.304 10724.073 - 10783.651: 75.5622% ( 247) 00:12:18.304 10783.651 - 10843.229: 77.2569% ( 205) 00:12:18.304 10843.229 - 10902.807: 78.8194% ( 189) 00:12:18.304 10902.807 - 10962.385: 80.2745% ( 176) 00:12:18.304 10962.385 - 11021.964: 81.6138% ( 162) 00:12:18.304 11021.964 - 11081.542: 82.8538% ( 150) 00:12:18.304 11081.542 - 11141.120: 83.9699% ( 135) 00:12:18.304 11141.120 - 11200.698: 84.9206% ( 115) 00:12:18.304 
11200.698 - 11260.276: 85.8796% ( 116) 00:12:18.304 11260.276 - 11319.855: 86.6733% ( 96) 00:12:18.304 11319.855 - 11379.433: 87.4421% ( 93) 00:12:18.304 11379.433 - 11439.011: 88.1035% ( 80) 00:12:18.304 11439.011 - 11498.589: 88.7235% ( 75) 00:12:18.304 11498.589 - 11558.167: 89.2444% ( 63) 00:12:18.304 11558.167 - 11617.745: 89.7569% ( 62) 00:12:18.304 11617.745 - 11677.324: 90.2282% ( 57) 00:12:18.304 11677.324 - 11736.902: 90.6663% ( 53) 00:12:18.304 11736.902 - 11796.480: 91.0714% ( 49) 00:12:18.304 11796.480 - 11856.058: 91.4517% ( 46) 00:12:18.304 11856.058 - 11915.636: 91.8320% ( 46) 00:12:18.304 11915.636 - 11975.215: 92.2040% ( 45) 00:12:18.304 11975.215 - 12034.793: 92.6009% ( 48) 00:12:18.304 12034.793 - 12094.371: 93.0060% ( 49) 00:12:18.304 12094.371 - 12153.949: 93.3945% ( 47) 00:12:18.304 12153.949 - 12213.527: 93.7831% ( 47) 00:12:18.304 12213.527 - 12273.105: 94.1220% ( 41) 00:12:18.304 12273.105 - 12332.684: 94.4775% ( 43) 00:12:18.304 12332.684 - 12392.262: 94.7751% ( 36) 00:12:18.304 12392.262 - 12451.840: 95.0976% ( 39) 00:12:18.304 12451.840 - 12511.418: 95.3538% ( 31) 00:12:18.304 12511.418 - 12570.996: 95.5853% ( 28) 00:12:18.304 12570.996 - 12630.575: 95.8333% ( 30) 00:12:18.304 12630.575 - 12690.153: 96.0483% ( 26) 00:12:18.304 12690.153 - 12749.731: 96.2632% ( 26) 00:12:18.304 12749.731 - 12809.309: 96.4451% ( 22) 00:12:18.304 12809.309 - 12868.887: 96.6353% ( 23) 00:12:18.304 12868.887 - 12928.465: 96.8089% ( 21) 00:12:18.304 12928.465 - 12988.044: 96.9907% ( 22) 00:12:18.304 12988.044 - 13047.622: 97.1809% ( 23) 00:12:18.304 13047.622 - 13107.200: 97.3462% ( 20) 00:12:18.304 13107.200 - 13166.778: 97.5281% ( 22) 00:12:18.304 13166.778 - 13226.356: 97.6604% ( 16) 00:12:18.304 13226.356 - 13285.935: 97.8009% ( 17) 00:12:18.304 13285.935 - 13345.513: 97.9249% ( 15) 00:12:18.304 13345.513 - 13405.091: 98.0489% ( 15) 00:12:18.304 13405.091 - 13464.669: 98.1647% ( 14) 00:12:18.304 13464.669 - 13524.247: 98.2556% ( 11) 00:12:18.304 13524.247 - 13583.825: 98.3300% ( 9) 00:12:18.304 13583.825 - 13643.404: 98.4127% ( 10) 00:12:18.304 13643.404 - 13702.982: 98.4788% ( 8) 00:12:18.304 13702.982 - 13762.560: 98.5450% ( 8) 00:12:18.304 13762.560 - 13822.138: 98.6111% ( 8) 00:12:18.304 13822.138 - 13881.716: 98.6772% ( 8) 00:12:18.304 13881.716 - 13941.295: 98.7186% ( 5) 00:12:18.304 13941.295 - 14000.873: 98.7682% ( 6) 00:12:18.304 14000.873 - 14060.451: 98.8095% ( 5) 00:12:18.304 14060.451 - 14120.029: 98.8591% ( 6) 00:12:18.304 14120.029 - 14179.607: 98.8922% ( 4) 00:12:18.304 14179.607 - 14239.185: 98.9253% ( 4) 00:12:18.304 14239.185 - 14298.764: 98.9418% ( 2) 00:12:18.304 29193.309 - 29312.465: 98.9501% ( 1) 00:12:18.304 29312.465 - 29431.622: 98.9749% ( 3) 00:12:18.304 29431.622 - 29550.778: 98.9997% ( 3) 00:12:18.304 29550.778 - 29669.935: 99.0162% ( 2) 00:12:18.304 29669.935 - 29789.091: 99.0410% ( 3) 00:12:18.304 29789.091 - 29908.247: 99.0658% ( 3) 00:12:18.304 29908.247 - 30027.404: 99.0823% ( 2) 00:12:18.304 30027.404 - 30146.560: 99.1071% ( 3) 00:12:18.304 30146.560 - 30265.716: 99.1402% ( 4) 00:12:18.304 30265.716 - 30384.873: 99.1567% ( 2) 00:12:18.304 30384.873 - 30504.029: 99.1815% ( 3) 00:12:18.304 30504.029 - 30742.342: 99.2312% ( 6) 00:12:18.304 30742.342 - 30980.655: 99.2725% ( 5) 00:12:18.304 30980.655 - 31218.967: 99.3221% ( 6) 00:12:18.304 31218.967 - 31457.280: 99.3717% ( 6) 00:12:18.304 31457.280 - 31695.593: 99.4130% ( 5) 00:12:18.304 31695.593 - 31933.905: 99.4626% ( 6) 00:12:18.304 31933.905 - 32172.218: 99.4709% ( 1) 00:12:18.304 36938.473 - 
37176.785: 99.4957% ( 3) 00:12:18.305 37176.785 - 37415.098: 99.5288% ( 4) 00:12:18.305 37415.098 - 37653.411: 99.5784% ( 6) 00:12:18.305 37653.411 - 37891.724: 99.6280% ( 6) 00:12:18.305 37891.724 - 38130.036: 99.6776% ( 6) 00:12:18.305 38130.036 - 38368.349: 99.7189% ( 5) 00:12:18.305 38368.349 - 38606.662: 99.7685% ( 6) 00:12:18.305 38606.662 - 38844.975: 99.8099% ( 5) 00:12:18.305 38844.975 - 39083.287: 99.8595% ( 6) 00:12:18.305 39083.287 - 39321.600: 99.9091% ( 6) 00:12:18.305 39321.600 - 39559.913: 99.9504% ( 5) 00:12:18.305 39559.913 - 39798.225: 100.0000% ( 6) 00:12:18.305 00:12:18.305 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:12:18.305 ============================================================================== 00:12:18.305 Range in us Cumulative IO count 00:12:18.305 8162.211 - 8221.789: 0.0083% ( 1) 00:12:18.305 8221.789 - 8281.367: 0.1653% ( 19) 00:12:18.305 8281.367 - 8340.945: 0.3390% ( 21) 00:12:18.305 8340.945 - 8400.524: 0.6200% ( 34) 00:12:18.305 8400.524 - 8460.102: 0.9259% ( 37) 00:12:18.305 8460.102 - 8519.680: 1.3476% ( 51) 00:12:18.305 8519.680 - 8579.258: 1.8519% ( 61) 00:12:18.305 8579.258 - 8638.836: 2.5380% ( 83) 00:12:18.305 8638.836 - 8698.415: 3.3234% ( 95) 00:12:18.305 8698.415 - 8757.993: 4.1419% ( 99) 00:12:18.305 8757.993 - 8817.571: 5.1009% ( 116) 00:12:18.305 8817.571 - 8877.149: 6.0681% ( 117) 00:12:18.305 8877.149 - 8936.727: 7.0602% ( 120) 00:12:18.305 8936.727 - 8996.305: 8.1432% ( 131) 00:12:18.305 8996.305 - 9055.884: 9.2841% ( 138) 00:12:18.305 9055.884 - 9115.462: 10.4580% ( 142) 00:12:18.305 9115.462 - 9175.040: 11.6319% ( 142) 00:12:18.305 9175.040 - 9234.618: 12.9712% ( 162) 00:12:18.305 9234.618 - 9294.196: 14.3932% ( 172) 00:12:18.305 9294.196 - 9353.775: 15.8978% ( 182) 00:12:18.305 9353.775 - 9413.353: 17.4190% ( 184) 00:12:18.305 9413.353 - 9472.931: 19.1220% ( 206) 00:12:18.305 9472.931 - 9532.509: 20.8995% ( 215) 00:12:18.305 9532.509 - 9592.087: 23.0241% ( 257) 00:12:18.305 9592.087 - 9651.665: 25.1901% ( 262) 00:12:18.305 9651.665 - 9711.244: 27.5215% ( 282) 00:12:18.305 9711.244 - 9770.822: 29.9934% ( 299) 00:12:18.305 9770.822 - 9830.400: 32.4487% ( 297) 00:12:18.305 9830.400 - 9889.978: 34.9454% ( 302) 00:12:18.305 9889.978 - 9949.556: 37.6819% ( 331) 00:12:18.305 9949.556 - 10009.135: 40.5754% ( 350) 00:12:18.305 10009.135 - 10068.713: 43.5516% ( 360) 00:12:18.305 10068.713 - 10128.291: 46.4616% ( 352) 00:12:18.305 10128.291 - 10187.869: 49.4461% ( 361) 00:12:18.305 10187.869 - 10247.447: 52.3148% ( 347) 00:12:18.305 10247.447 - 10307.025: 55.1753% ( 346) 00:12:18.305 10307.025 - 10366.604: 58.0522% ( 348) 00:12:18.305 10366.604 - 10426.182: 60.7804% ( 330) 00:12:18.305 10426.182 - 10485.760: 63.4755% ( 326) 00:12:18.305 10485.760 - 10545.338: 66.0053% ( 306) 00:12:18.305 10545.338 - 10604.916: 68.4772% ( 299) 00:12:18.305 10604.916 - 10664.495: 70.8333% ( 285) 00:12:18.305 10664.495 - 10724.073: 73.0985% ( 274) 00:12:18.305 10724.073 - 10783.651: 75.1488% ( 248) 00:12:18.305 10783.651 - 10843.229: 77.0585% ( 231) 00:12:18.305 10843.229 - 10902.807: 78.6210% ( 189) 00:12:18.305 10902.807 - 10962.385: 80.1174% ( 181) 00:12:18.305 10962.385 - 11021.964: 81.5972% ( 179) 00:12:18.305 11021.964 - 11081.542: 82.8704% ( 154) 00:12:18.305 11081.542 - 11141.120: 83.9782% ( 134) 00:12:18.305 11141.120 - 11200.698: 84.9868% ( 122) 00:12:18.305 11200.698 - 11260.276: 85.8548% ( 105) 00:12:18.305 11260.276 - 11319.855: 86.6485% ( 96) 00:12:18.305 11319.855 - 11379.433: 87.4339% ( 95) 00:12:18.305 11379.433 - 
11439.011: 88.1366% ( 85) 00:12:18.305 11439.011 - 11498.589: 88.8558% ( 87) 00:12:18.305 11498.589 - 11558.167: 89.5007% ( 78) 00:12:18.305 11558.167 - 11617.745: 90.0628% ( 68) 00:12:18.305 11617.745 - 11677.324: 90.6167% ( 67) 00:12:18.305 11677.324 - 11736.902: 91.0466% ( 52) 00:12:18.305 11736.902 - 11796.480: 91.4352% ( 47) 00:12:18.305 11796.480 - 11856.058: 91.8485% ( 50) 00:12:18.305 11856.058 - 11915.636: 92.2371% ( 47) 00:12:18.305 11915.636 - 11975.215: 92.6009% ( 44) 00:12:18.305 11975.215 - 12034.793: 92.9233% ( 39) 00:12:18.305 12034.793 - 12094.371: 93.2292% ( 37) 00:12:18.305 12094.371 - 12153.949: 93.5516% ( 39) 00:12:18.305 12153.949 - 12213.527: 93.8988% ( 42) 00:12:18.305 12213.527 - 12273.105: 94.2295% ( 40) 00:12:18.305 12273.105 - 12332.684: 94.5188% ( 35) 00:12:18.305 12332.684 - 12392.262: 94.7917% ( 33) 00:12:18.305 12392.262 - 12451.840: 95.0479% ( 31) 00:12:18.305 12451.840 - 12511.418: 95.3208% ( 33) 00:12:18.305 12511.418 - 12570.996: 95.5771% ( 31) 00:12:18.305 12570.996 - 12630.575: 95.7920% ( 26) 00:12:18.305 12630.575 - 12690.153: 95.9904% ( 24) 00:12:18.305 12690.153 - 12749.731: 96.1888% ( 24) 00:12:18.305 12749.731 - 12809.309: 96.3624% ( 21) 00:12:18.305 12809.309 - 12868.887: 96.5195% ( 19) 00:12:18.305 12868.887 - 12928.465: 96.6766% ( 19) 00:12:18.305 12928.465 - 12988.044: 96.8585% ( 22) 00:12:18.305 12988.044 - 13047.622: 97.0403% ( 22) 00:12:18.305 13047.622 - 13107.200: 97.1726% ( 16) 00:12:18.305 13107.200 - 13166.778: 97.2966% ( 15) 00:12:18.305 13166.778 - 13226.356: 97.4289% ( 16) 00:12:18.305 13226.356 - 13285.935: 97.5694% ( 17) 00:12:18.305 13285.935 - 13345.513: 97.7100% ( 17) 00:12:18.305 13345.513 - 13405.091: 97.8340% ( 15) 00:12:18.305 13405.091 - 13464.669: 97.9415% ( 13) 00:12:18.305 13464.669 - 13524.247: 98.0489% ( 13) 00:12:18.305 13524.247 - 13583.825: 98.1647% ( 14) 00:12:18.305 13583.825 - 13643.404: 98.2722% ( 13) 00:12:18.305 13643.404 - 13702.982: 98.3548% ( 10) 00:12:18.305 13702.982 - 13762.560: 98.4458% ( 11) 00:12:18.305 13762.560 - 13822.138: 98.5284% ( 10) 00:12:18.305 13822.138 - 13881.716: 98.6359% ( 13) 00:12:18.305 13881.716 - 13941.295: 98.7269% ( 11) 00:12:18.305 13941.295 - 14000.873: 98.7847% ( 7) 00:12:18.305 14000.873 - 14060.451: 98.8178% ( 4) 00:12:18.305 14060.451 - 14120.029: 98.8426% ( 3) 00:12:18.305 14120.029 - 14179.607: 98.8674% ( 3) 00:12:18.305 14179.607 - 14239.185: 98.8922% ( 3) 00:12:18.305 14239.185 - 14298.764: 98.9087% ( 2) 00:12:18.305 14298.764 - 14358.342: 98.9335% ( 3) 00:12:18.305 14358.342 - 14417.920: 98.9418% ( 1) 00:12:18.305 26095.244 - 26214.400: 98.9501% ( 1) 00:12:18.305 26214.400 - 26333.556: 98.9749% ( 3) 00:12:18.305 26333.556 - 26452.713: 98.9914% ( 2) 00:12:18.305 26452.713 - 26571.869: 99.0162% ( 3) 00:12:18.305 26571.869 - 26691.025: 99.0410% ( 3) 00:12:18.305 26691.025 - 26810.182: 99.0658% ( 3) 00:12:18.305 26810.182 - 26929.338: 99.0906% ( 3) 00:12:18.305 26929.338 - 27048.495: 99.1154% ( 3) 00:12:18.305 27048.495 - 27167.651: 99.1402% ( 3) 00:12:18.305 27167.651 - 27286.807: 99.1567% ( 2) 00:12:18.305 27286.807 - 27405.964: 99.1815% ( 3) 00:12:18.305 27405.964 - 27525.120: 99.2063% ( 3) 00:12:18.305 27525.120 - 27644.276: 99.2229% ( 2) 00:12:18.305 27644.276 - 27763.433: 99.2477% ( 3) 00:12:18.305 27763.433 - 27882.589: 99.2725% ( 3) 00:12:18.305 27882.589 - 28001.745: 99.2973% ( 3) 00:12:18.305 28001.745 - 28120.902: 99.3221% ( 3) 00:12:18.305 28120.902 - 28240.058: 99.3469% ( 3) 00:12:18.305 28240.058 - 28359.215: 99.3717% ( 3) 00:12:18.305 28359.215 - 28478.371: 
99.3965% ( 3) 00:12:18.305 28478.371 - 28597.527: 99.4130% ( 2) 00:12:18.305 28597.527 - 28716.684: 99.4378% ( 3) 00:12:18.305 28716.684 - 28835.840: 99.4626% ( 3) 00:12:18.305 28835.840 - 28954.996: 99.4709% ( 1) 00:12:18.305 33840.407 - 34078.720: 99.4792% ( 1) 00:12:18.305 34078.720 - 34317.033: 99.5205% ( 5) 00:12:18.305 34317.033 - 34555.345: 99.5701% ( 6) 00:12:18.305 34555.345 - 34793.658: 99.6114% ( 5) 00:12:18.305 34793.658 - 35031.971: 99.6528% ( 5) 00:12:18.305 35031.971 - 35270.284: 99.7024% ( 6) 00:12:18.305 35270.284 - 35508.596: 99.7437% ( 5) 00:12:18.305 35508.596 - 35746.909: 99.7933% ( 6) 00:12:18.305 35746.909 - 35985.222: 99.8264% ( 4) 00:12:18.305 35985.222 - 36223.535: 99.8760% ( 6) 00:12:18.305 36223.535 - 36461.847: 99.9173% ( 5) 00:12:18.305 36461.847 - 36700.160: 99.9669% ( 6) 00:12:18.305 36700.160 - 36938.473: 100.0000% ( 4) 00:12:18.305 00:12:18.305 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:12:18.305 ============================================================================== 00:12:18.305 Range in us Cumulative IO count 00:12:18.305 8162.211 - 8221.789: 0.0413% ( 5) 00:12:18.305 8221.789 - 8281.367: 0.1405% ( 12) 00:12:18.305 8281.367 - 8340.945: 0.3390% ( 24) 00:12:18.305 8340.945 - 8400.524: 0.6118% ( 33) 00:12:18.305 8400.524 - 8460.102: 0.9507% ( 41) 00:12:18.305 8460.102 - 8519.680: 1.3476% ( 48) 00:12:18.305 8519.680 - 8579.258: 1.9593% ( 74) 00:12:18.305 8579.258 - 8638.836: 2.6042% ( 78) 00:12:18.305 8638.836 - 8698.415: 3.4061% ( 97) 00:12:18.305 8698.415 - 8757.993: 4.2907% ( 107) 00:12:18.305 8757.993 - 8817.571: 5.2497% ( 116) 00:12:18.305 8817.571 - 8877.149: 6.2583% ( 122) 00:12:18.305 8877.149 - 8936.727: 7.3495% ( 132) 00:12:18.305 8936.727 - 8996.305: 8.4243% ( 130) 00:12:18.305 8996.305 - 9055.884: 9.5982% ( 142) 00:12:18.305 9055.884 - 9115.462: 10.8383% ( 150) 00:12:18.305 9115.462 - 9175.040: 12.1610% ( 160) 00:12:18.305 9175.040 - 9234.618: 13.4921% ( 161) 00:12:18.305 9234.618 - 9294.196: 14.9140% ( 172) 00:12:18.305 9294.196 - 9353.775: 16.3690% ( 176) 00:12:18.305 9353.775 - 9413.353: 17.8654% ( 181) 00:12:18.305 9413.353 - 9472.931: 19.7007% ( 222) 00:12:18.305 9472.931 - 9532.509: 21.4368% ( 210) 00:12:18.305 9532.509 - 9592.087: 23.6194% ( 264) 00:12:18.305 9592.087 - 9651.665: 25.8185% ( 266) 00:12:18.305 9651.665 - 9711.244: 28.1415% ( 281) 00:12:18.305 9711.244 - 9770.822: 30.4646% ( 281) 00:12:18.305 9770.822 - 9830.400: 32.8538% ( 289) 00:12:18.305 9830.400 - 9889.978: 35.2183% ( 286) 00:12:18.306 9889.978 - 9949.556: 37.8472% ( 318) 00:12:18.306 9949.556 - 10009.135: 40.5093% ( 322) 00:12:18.306 10009.135 - 10068.713: 43.3036% ( 338) 00:12:18.306 10068.713 - 10128.291: 46.1558% ( 345) 00:12:18.306 10128.291 - 10187.869: 49.0823% ( 354) 00:12:18.306 10187.869 - 10247.447: 51.8684% ( 337) 00:12:18.306 10247.447 - 10307.025: 54.4891% ( 317) 00:12:18.306 10307.025 - 10366.604: 57.2421% ( 333) 00:12:18.306 10366.604 - 10426.182: 59.9868% ( 332) 00:12:18.306 10426.182 - 10485.760: 62.6157% ( 318) 00:12:18.306 10485.760 - 10545.338: 65.3108% ( 326) 00:12:18.306 10545.338 - 10604.916: 67.9150% ( 315) 00:12:18.306 10604.916 - 10664.495: 70.3208% ( 291) 00:12:18.306 10664.495 - 10724.073: 72.5529% ( 270) 00:12:18.306 10724.073 - 10783.651: 74.6693% ( 256) 00:12:18.306 10783.651 - 10843.229: 76.4716% ( 218) 00:12:18.306 10843.229 - 10902.807: 78.1994% ( 209) 00:12:18.306 10902.807 - 10962.385: 79.7619% ( 189) 00:12:18.306 10962.385 - 11021.964: 81.2996% ( 186) 00:12:18.306 11021.964 - 11081.542: 82.6472% ( 163) 
00:12:18.306 11081.542 - 11141.120: 83.7715% ( 136) 00:12:18.306 11141.120 - 11200.698: 84.7636% ( 120) 00:12:18.306 11200.698 - 11260.276: 85.6151% ( 103) 00:12:18.306 11260.276 - 11319.855: 86.3922% ( 94) 00:12:18.306 11319.855 - 11379.433: 87.1280% ( 89) 00:12:18.306 11379.433 - 11439.011: 87.8886% ( 92) 00:12:18.306 11439.011 - 11498.589: 88.5747% ( 83) 00:12:18.306 11498.589 - 11558.167: 89.2361% ( 80) 00:12:18.306 11558.167 - 11617.745: 89.7983% ( 68) 00:12:18.306 11617.745 - 11677.324: 90.3356% ( 65) 00:12:18.306 11677.324 - 11736.902: 90.8069% ( 57) 00:12:18.306 11736.902 - 11796.480: 91.1954% ( 47) 00:12:18.306 11796.480 - 11856.058: 91.6501% ( 55) 00:12:18.306 11856.058 - 11915.636: 92.0470% ( 48) 00:12:18.306 11915.636 - 11975.215: 92.4272% ( 46) 00:12:18.306 11975.215 - 12034.793: 92.8241% ( 48) 00:12:18.306 12034.793 - 12094.371: 93.2126% ( 47) 00:12:18.306 12094.371 - 12153.949: 93.5764% ( 44) 00:12:18.306 12153.949 - 12213.527: 93.9153% ( 41) 00:12:18.306 12213.527 - 12273.105: 94.2460% ( 40) 00:12:18.306 12273.105 - 12332.684: 94.5271% ( 34) 00:12:18.306 12332.684 - 12392.262: 94.8247% ( 36) 00:12:18.306 12392.262 - 12451.840: 95.0562% ( 28) 00:12:18.306 12451.840 - 12511.418: 95.3208% ( 32) 00:12:18.306 12511.418 - 12570.996: 95.5440% ( 27) 00:12:18.306 12570.996 - 12630.575: 95.7589% ( 26) 00:12:18.306 12630.575 - 12690.153: 95.9491% ( 23) 00:12:18.306 12690.153 - 12749.731: 96.1723% ( 27) 00:12:18.306 12749.731 - 12809.309: 96.3624% ( 23) 00:12:18.306 12809.309 - 12868.887: 96.5360% ( 21) 00:12:18.306 12868.887 - 12928.465: 96.7179% ( 22) 00:12:18.306 12928.465 - 12988.044: 96.8833% ( 20) 00:12:18.306 12988.044 - 13047.622: 97.0238% ( 17) 00:12:18.306 13047.622 - 13107.200: 97.1809% ( 19) 00:12:18.306 13107.200 - 13166.778: 97.3297% ( 18) 00:12:18.306 13166.778 - 13226.356: 97.4702% ( 17) 00:12:18.306 13226.356 - 13285.935: 97.6190% ( 18) 00:12:18.306 13285.935 - 13345.513: 97.7265% ( 13) 00:12:18.306 13345.513 - 13405.091: 97.8505% ( 15) 00:12:18.306 13405.091 - 13464.669: 97.9663% ( 14) 00:12:18.306 13464.669 - 13524.247: 98.0572% ( 11) 00:12:18.306 13524.247 - 13583.825: 98.1812% ( 15) 00:12:18.306 13583.825 - 13643.404: 98.2887% ( 13) 00:12:18.306 13643.404 - 13702.982: 98.4127% ( 15) 00:12:18.306 13702.982 - 13762.560: 98.5036% ( 11) 00:12:18.306 13762.560 - 13822.138: 98.5450% ( 5) 00:12:18.306 13822.138 - 13881.716: 98.6028% ( 7) 00:12:18.306 13881.716 - 13941.295: 98.6442% ( 5) 00:12:18.306 13941.295 - 14000.873: 98.6855% ( 5) 00:12:18.306 14000.873 - 14060.451: 98.7351% ( 6) 00:12:18.306 14060.451 - 14120.029: 98.7847% ( 6) 00:12:18.306 14120.029 - 14179.607: 98.8261% ( 5) 00:12:18.306 14179.607 - 14239.185: 98.8674% ( 5) 00:12:18.306 14239.185 - 14298.764: 98.8922% ( 3) 00:12:18.306 14298.764 - 14358.342: 98.9087% ( 2) 00:12:18.306 14358.342 - 14417.920: 98.9335% ( 3) 00:12:18.306 14417.920 - 14477.498: 98.9418% ( 1) 00:12:18.306 23354.647 - 23473.804: 98.9666% ( 3) 00:12:18.306 23473.804 - 23592.960: 98.9831% ( 2) 00:12:18.306 23592.960 - 23712.116: 98.9997% ( 2) 00:12:18.306 23712.116 - 23831.273: 99.0245% ( 3) 00:12:18.306 23831.273 - 23950.429: 99.0493% ( 3) 00:12:18.306 23950.429 - 24069.585: 99.0823% ( 4) 00:12:18.306 24069.585 - 24188.742: 99.0989% ( 2) 00:12:18.306 24188.742 - 24307.898: 99.1154% ( 2) 00:12:18.306 24307.898 - 24427.055: 99.1402% ( 3) 00:12:18.306 24427.055 - 24546.211: 99.1650% ( 3) 00:12:18.306 24546.211 - 24665.367: 99.1898% ( 3) 00:12:18.306 24665.367 - 24784.524: 99.2146% ( 3) 00:12:18.306 24784.524 - 24903.680: 99.2394% ( 3) 
00:12:18.306 24903.680 - 25022.836: 99.2642% ( 3) 00:12:18.306 25022.836 - 25141.993: 99.2890% ( 3) 00:12:18.306 25141.993 - 25261.149: 99.3138% ( 3) 00:12:18.306 25261.149 - 25380.305: 99.3304% ( 2) 00:12:18.306 25380.305 - 25499.462: 99.3552% ( 3) 00:12:18.306 25499.462 - 25618.618: 99.3800% ( 3) 00:12:18.306 25618.618 - 25737.775: 99.4048% ( 3) 00:12:18.306 25737.775 - 25856.931: 99.4296% ( 3) 00:12:18.306 25856.931 - 25976.087: 99.4544% ( 3) 00:12:18.306 25976.087 - 26095.244: 99.4709% ( 2) 00:12:18.306 30980.655 - 31218.967: 99.4874% ( 2) 00:12:18.306 31218.967 - 31457.280: 99.5370% ( 6) 00:12:18.306 31457.280 - 31695.593: 99.5784% ( 5) 00:12:18.306 31695.593 - 31933.905: 99.6280% ( 6) 00:12:18.306 31933.905 - 32172.218: 99.6776% ( 6) 00:12:18.306 32172.218 - 32410.531: 99.7189% ( 5) 00:12:18.306 32410.531 - 32648.844: 99.7685% ( 6) 00:12:18.306 32648.844 - 32887.156: 99.8181% ( 6) 00:12:18.306 32887.156 - 33125.469: 99.8677% ( 6) 00:12:18.306 33125.469 - 33363.782: 99.9173% ( 6) 00:12:18.306 33363.782 - 33602.095: 99.9669% ( 6) 00:12:18.306 33602.095 - 33840.407: 100.0000% ( 4) 00:12:18.306 00:12:18.306 07:48:40 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:12:19.682 Initializing NVMe Controllers 00:12:19.682 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:19.682 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:19.682 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:19.682 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:19.682 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:12:19.682 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:12:19.682 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:12:19.682 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:12:19.682 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:12:19.682 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:12:19.682 Initialization complete. Launching workers. 
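The spdk_nvme_perf invocation above drives a 100% write workload for one second at queue depth 128 with a 12288-byte (12 KiB) IO size (-q 128 -w write -o 12288 -t 1); -i 0 selects shared-memory group 0, and the repeated -L appears to request detailed software latency tracking, which is what produces the per-device summaries and per-bucket histograms that follow. A minimal sketch of the kind of submit/complete loop such a tool runs against the public SPDK NVMe API (illustrative only, error handling omitted; this is not perf's actual source):

#include "spdk/env.h"
#include "spdk/nvme.h"

#define QUEUE_DEPTH 128     /* mirrors -q 128 */
#define IO_SIZE     12288   /* mirrors -o 12288 */

static uint64_t g_outstanding;

static void
io_complete(void *arg, const struct spdk_nvme_cpl *cpl)
{
    g_outstanding--;    /* a latency tracker would also timestamp each IO here */
}

static void
run_write_load(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_ns *ns, uint64_t seconds)
{
    struct spdk_nvme_qpair *qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
    void *buf = spdk_zmalloc(IO_SIZE, 0x1000, NULL, SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
    uint32_t lba_count = IO_SIZE / spdk_nvme_ns_get_sector_size(ns);
    uint64_t end_tsc = spdk_get_ticks() + seconds * spdk_get_ticks_hz();

    while (spdk_get_ticks() < end_tsc) {
        while (g_outstanding < QUEUE_DEPTH) {   /* keep -q 128 IOs in flight */
            spdk_nvme_ns_cmd_write(ns, qpair, buf, 0 /* lba */, lba_count,
                                   io_complete, NULL, 0);
            g_outstanding++;
        }
        spdk_nvme_qpair_process_completions(qpair, 0 /* reap all ready */);
    }
    spdk_free(buf);
    spdk_nvme_ctrlr_free_io_qpair(qpair);
}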
00:12:19.682 ======================================================== 00:12:19.682 Latency(us) 00:12:19.682 Device Information : IOPS MiB/s Average min max 00:12:19.682 PCIE (0000:00:10.0) NSID 1 from core 0: 10996.37 128.86 11666.68 9507.91 45892.52 00:12:19.682 PCIE (0000:00:11.0) NSID 1 from core 0: 10996.37 128.86 11636.75 9728.29 42617.05 00:12:19.682 PCIE (0000:00:13.0) NSID 1 from core 0: 10996.37 128.86 11606.01 9801.62 40505.93 00:12:19.682 PCIE (0000:00:12.0) NSID 1 from core 0: 10996.37 128.86 11575.74 9859.23 37395.17 00:12:19.682 PCIE (0000:00:12.0) NSID 2 from core 0: 11060.30 129.61 11477.84 9603.41 28669.56 00:12:19.682 PCIE (0000:00:12.0) NSID 3 from core 0: 11060.30 129.61 11446.95 9778.75 25394.06 00:12:19.682 ======================================================== 00:12:19.682 Total : 66106.06 774.68 11568.12 9507.91 45892.52 00:12:19.682 00:12:19.682 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:12:19.682 ================================================================================= 00:12:19.682 1.00000% : 10009.135us 00:12:19.682 10.00000% : 10426.182us 00:12:19.682 25.00000% : 10724.073us 00:12:19.682 50.00000% : 11200.698us 00:12:19.682 75.00000% : 11796.480us 00:12:19.682 90.00000% : 12630.575us 00:12:19.682 95.00000% : 13405.091us 00:12:19.682 98.00000% : 14060.451us 00:12:19.682 99.00000% : 36461.847us 00:12:19.682 99.50000% : 43611.229us 00:12:19.682 99.90000% : 45517.731us 00:12:19.682 99.99000% : 45994.356us 00:12:19.682 99.99900% : 45994.356us 00:12:19.682 99.99990% : 45994.356us 00:12:19.682 99.99999% : 45994.356us 00:12:19.682 00:12:19.682 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:12:19.682 ================================================================================= 00:12:19.682 1.00000% : 10247.447us 00:12:19.682 10.00000% : 10604.916us 00:12:19.682 25.00000% : 10843.229us 00:12:19.682 50.00000% : 11200.698us 00:12:19.682 75.00000% : 11677.324us 00:12:19.682 90.00000% : 12630.575us 00:12:19.682 95.00000% : 13464.669us 00:12:19.682 98.00000% : 13881.716us 00:12:19.682 99.00000% : 33363.782us 00:12:19.682 99.50000% : 40513.164us 00:12:19.682 99.90000% : 42419.665us 00:12:19.682 99.99000% : 42657.978us 00:12:19.682 99.99900% : 42657.978us 00:12:19.682 99.99990% : 42657.978us 00:12:19.682 99.99999% : 42657.978us 00:12:19.682 00:12:19.682 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:12:19.682 ================================================================================= 00:12:19.682 1.00000% : 10187.869us 00:12:19.682 10.00000% : 10545.338us 00:12:19.682 25.00000% : 10843.229us 00:12:19.682 50.00000% : 11200.698us 00:12:19.682 75.00000% : 11677.324us 00:12:19.682 90.00000% : 12630.575us 00:12:19.682 95.00000% : 13285.935us 00:12:19.682 98.00000% : 13881.716us 00:12:19.682 99.00000% : 30980.655us 00:12:19.682 99.50000% : 38368.349us 00:12:19.682 99.90000% : 40274.851us 00:12:19.682 99.99000% : 40513.164us 00:12:19.682 99.99900% : 40513.164us 00:12:19.682 99.99990% : 40513.164us 00:12:19.682 99.99999% : 40513.164us 00:12:19.682 00:12:19.682 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:12:19.682 ================================================================================= 00:12:19.682 1.00000% : 10187.869us 00:12:19.682 10.00000% : 10545.338us 00:12:19.682 25.00000% : 10843.229us 00:12:19.682 50.00000% : 11200.698us 00:12:19.682 75.00000% : 11677.324us 00:12:19.682 90.00000% : 12630.575us 00:12:19.682 95.00000% : 13405.091us 00:12:19.682 98.00000% : 
14000.873us 00:12:19.682 99.00000% : 28001.745us 00:12:19.682 99.50000% : 35270.284us 00:12:19.682 99.90000% : 37176.785us 00:12:19.682 99.99000% : 37415.098us 00:12:19.682 99.99900% : 37415.098us 00:12:19.682 99.99990% : 37415.098us 00:12:19.682 99.99999% : 37415.098us 00:12:19.682 00:12:19.683 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:12:19.683 ================================================================================= 00:12:19.683 1.00000% : 10128.291us 00:12:19.683 10.00000% : 10545.338us 00:12:19.683 25.00000% : 10843.229us 00:12:19.683 50.00000% : 11200.698us 00:12:19.683 75.00000% : 11677.324us 00:12:19.683 90.00000% : 12630.575us 00:12:19.683 95.00000% : 13226.356us 00:12:19.683 98.00000% : 14179.607us 00:12:19.683 99.00000% : 19065.018us 00:12:19.683 99.50000% : 26452.713us 00:12:19.683 99.90000% : 28240.058us 00:12:19.683 99.99000% : 28716.684us 00:12:19.683 99.99900% : 28716.684us 00:12:19.683 99.99990% : 28716.684us 00:12:19.683 99.99999% : 28716.684us 00:12:19.683 00:12:19.683 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:12:19.683 ================================================================================= 00:12:19.683 1.00000% : 10128.291us 00:12:19.683 10.00000% : 10545.338us 00:12:19.683 25.00000% : 10843.229us 00:12:19.683 50.00000% : 11200.698us 00:12:19.683 75.00000% : 11736.902us 00:12:19.683 90.00000% : 12630.575us 00:12:19.683 95.00000% : 13285.935us 00:12:19.683 98.00000% : 14358.342us 00:12:19.683 99.00000% : 15966.953us 00:12:19.683 99.50000% : 23235.491us 00:12:19.683 99.90000% : 25022.836us 00:12:19.683 99.99000% : 25380.305us 00:12:19.683 99.99900% : 25499.462us 00:12:19.683 99.99990% : 25499.462us 00:12:19.683 99.99999% : 25499.462us 00:12:19.683 00:12:19.683 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:12:19.683 ============================================================================== 00:12:19.683 Range in us Cumulative IO count 00:12:19.683 9472.931 - 9532.509: 0.0091% ( 1) 00:12:19.683 9532.509 - 9592.087: 0.0182% ( 1) 00:12:19.683 9592.087 - 9651.665: 0.0273% ( 1) 00:12:19.683 9711.244 - 9770.822: 0.0545% ( 3) 00:12:19.683 9770.822 - 9830.400: 0.0908% ( 4) 00:12:19.683 9830.400 - 9889.978: 0.3997% ( 34) 00:12:19.683 9889.978 - 9949.556: 0.8085% ( 45) 00:12:19.683 9949.556 - 10009.135: 1.2900% ( 53) 00:12:19.683 10009.135 - 10068.713: 2.0803% ( 87) 00:12:19.683 10068.713 - 10128.291: 2.5709% ( 54) 00:12:19.683 10128.291 - 10187.869: 3.3339% ( 84) 00:12:19.683 10187.869 - 10247.447: 4.6966% ( 150) 00:12:19.683 10247.447 - 10307.025: 6.4226% ( 190) 00:12:19.683 10307.025 - 10366.604: 8.6483% ( 245) 00:12:19.683 10366.604 - 10426.182: 11.4916% ( 313) 00:12:19.683 10426.182 - 10485.760: 15.0890% ( 396) 00:12:19.683 10485.760 - 10545.338: 17.4600% ( 261) 00:12:19.683 10545.338 - 10604.916: 19.8219% ( 260) 00:12:19.683 10604.916 - 10664.495: 22.9833% ( 348) 00:12:19.683 10664.495 - 10724.073: 26.2900% ( 364) 00:12:19.683 10724.073 - 10783.651: 29.2333% ( 324) 00:12:19.683 10783.651 - 10843.229: 32.3401% ( 342) 00:12:19.683 10843.229 - 10902.807: 35.1744% ( 312) 00:12:19.683 10902.807 - 10962.385: 38.1632% ( 329) 00:12:19.683 10962.385 - 11021.964: 41.4789% ( 365) 00:12:19.683 11021.964 - 11081.542: 44.5494% ( 338) 00:12:19.683 11081.542 - 11141.120: 48.1468% ( 396) 00:12:19.683 11141.120 - 11200.698: 51.3081% ( 348) 00:12:19.683 11200.698 - 11260.276: 54.6602% ( 369) 00:12:19.683 11260.276 - 11319.855: 57.6399% ( 328) 00:12:19.683 11319.855 - 11379.433: 60.2562% ( 288) 
00:12:19.683 11379.433 - 11439.011: 62.9724% ( 299) 00:12:19.683 11439.011 - 11498.589: 65.5614% ( 285) 00:12:19.683 11498.589 - 11558.167: 67.8143% ( 248) 00:12:19.683 11558.167 - 11617.745: 69.8310% ( 222) 00:12:19.683 11617.745 - 11677.324: 71.6388% ( 199) 00:12:19.683 11677.324 - 11736.902: 73.8100% ( 239) 00:12:19.683 11736.902 - 11796.480: 75.7812% ( 217) 00:12:19.683 11796.480 - 11856.058: 77.6254% ( 203) 00:12:19.683 11856.058 - 11915.636: 79.3060% ( 185) 00:12:19.683 11915.636 - 11975.215: 80.7867% ( 163) 00:12:19.683 11975.215 - 12034.793: 82.1221% ( 147) 00:12:19.683 12034.793 - 12094.371: 83.2940% ( 129) 00:12:19.683 12094.371 - 12153.949: 84.3387% ( 115) 00:12:19.683 12153.949 - 12213.527: 85.4469% ( 122) 00:12:19.683 12213.527 - 12273.105: 86.4190% ( 107) 00:12:19.683 12273.105 - 12332.684: 87.1457% ( 80) 00:12:19.683 12332.684 - 12392.262: 87.8906% ( 82) 00:12:19.683 12392.262 - 12451.840: 88.5265% ( 70) 00:12:19.683 12451.840 - 12511.418: 89.0988% ( 63) 00:12:19.683 12511.418 - 12570.996: 89.6893% ( 65) 00:12:19.683 12570.996 - 12630.575: 90.1617% ( 52) 00:12:19.683 12630.575 - 12690.153: 90.5977% ( 48) 00:12:19.683 12690.153 - 12749.731: 91.1156% ( 57) 00:12:19.683 12749.731 - 12809.309: 91.6334% ( 57) 00:12:19.683 12809.309 - 12868.887: 92.0876% ( 50) 00:12:19.683 12868.887 - 12928.465: 92.3964% ( 34) 00:12:19.683 12928.465 - 12988.044: 92.7598% ( 40) 00:12:19.683 12988.044 - 13047.622: 93.1959% ( 48) 00:12:19.683 13047.622 - 13107.200: 93.5320% ( 37) 00:12:19.683 13107.200 - 13166.778: 93.7591% ( 25) 00:12:19.683 13166.778 - 13226.356: 94.1406% ( 42) 00:12:19.683 13226.356 - 13285.935: 94.4586% ( 35) 00:12:19.683 13285.935 - 13345.513: 94.7765% ( 35) 00:12:19.683 13345.513 - 13405.091: 95.1399% ( 40) 00:12:19.683 13405.091 - 13464.669: 95.4124% ( 30) 00:12:19.683 13464.669 - 13524.247: 95.6577% ( 27) 00:12:19.683 13524.247 - 13583.825: 95.9393% ( 31) 00:12:19.683 13583.825 - 13643.404: 96.2118% ( 30) 00:12:19.683 13643.404 - 13702.982: 96.6479% ( 48) 00:12:19.683 13702.982 - 13762.560: 96.9749% ( 36) 00:12:19.683 13762.560 - 13822.138: 97.2475% ( 30) 00:12:19.683 13822.138 - 13881.716: 97.4382% ( 21) 00:12:19.683 13881.716 - 13941.295: 97.6926% ( 28) 00:12:19.683 13941.295 - 14000.873: 97.8834% ( 21) 00:12:19.683 14000.873 - 14060.451: 98.0015% ( 13) 00:12:19.683 14060.451 - 14120.029: 98.1105% ( 12) 00:12:19.683 14120.029 - 14179.607: 98.2195% ( 12) 00:12:19.683 14179.607 - 14239.185: 98.3285% ( 12) 00:12:19.683 14239.185 - 14298.764: 98.3921% ( 7) 00:12:19.683 14298.764 - 14358.342: 98.4829% ( 10) 00:12:19.683 14358.342 - 14417.920: 98.5374% ( 6) 00:12:19.683 14417.920 - 14477.498: 98.5919% ( 6) 00:12:19.683 14477.498 - 14537.076: 98.6464% ( 6) 00:12:19.683 14537.076 - 14596.655: 98.6828% ( 4) 00:12:19.683 14596.655 - 14656.233: 98.7282% ( 5) 00:12:19.683 14656.233 - 14715.811: 98.7827% ( 6) 00:12:19.683 14715.811 - 14775.389: 98.8281% ( 5) 00:12:19.683 14775.389 - 14834.967: 98.8372% ( 1) 00:12:19.683 35270.284 - 35508.596: 98.8463% ( 1) 00:12:19.683 35508.596 - 35746.909: 98.8917% ( 5) 00:12:19.683 35746.909 - 35985.222: 98.9281% ( 4) 00:12:19.683 35985.222 - 36223.535: 98.9826% ( 6) 00:12:19.683 36223.535 - 36461.847: 99.0280% ( 5) 00:12:19.683 36461.847 - 36700.160: 99.0916% ( 7) 00:12:19.683 36700.160 - 36938.473: 99.1370% ( 5) 00:12:19.683 36938.473 - 37176.785: 99.1824% ( 5) 00:12:19.683 37176.785 - 37415.098: 99.2369% ( 6) 00:12:19.683 37415.098 - 37653.411: 99.2914% ( 6) 00:12:19.683 37653.411 - 37891.724: 99.3278% ( 4) 00:12:19.683 37891.724 - 38130.036: 
99.3732% ( 5) 00:12:19.683 38130.036 - 38368.349: 99.4186% ( 5) 00:12:19.683 42896.291 - 43134.604: 99.4368% ( 2) 00:12:19.683 43134.604 - 43372.916: 99.4822% ( 5) 00:12:19.683 43372.916 - 43611.229: 99.5367% ( 6) 00:12:19.683 43611.229 - 43849.542: 99.5821% ( 5) 00:12:19.683 43849.542 - 44087.855: 99.6366% ( 6) 00:12:19.683 44087.855 - 44326.167: 99.6820% ( 5) 00:12:19.683 44326.167 - 44564.480: 99.7366% ( 6) 00:12:19.683 44564.480 - 44802.793: 99.7820% ( 5) 00:12:19.683 44802.793 - 45041.105: 99.8365% ( 6) 00:12:19.683 45041.105 - 45279.418: 99.8819% ( 5) 00:12:19.683 45279.418 - 45517.731: 99.9364% ( 6) 00:12:19.683 45517.731 - 45756.044: 99.9818% ( 5) 00:12:19.683 45756.044 - 45994.356: 100.0000% ( 2) 00:12:19.683 00:12:19.683 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:12:19.684 ============================================================================== 00:12:19.684 Range in us Cumulative IO count 00:12:19.684 9711.244 - 9770.822: 0.0091% ( 1) 00:12:19.684 9830.400 - 9889.978: 0.0273% ( 2) 00:12:19.684 9889.978 - 9949.556: 0.0727% ( 5) 00:12:19.684 9949.556 - 10009.135: 0.1272% ( 6) 00:12:19.684 10009.135 - 10068.713: 0.2725% ( 16) 00:12:19.684 10068.713 - 10128.291: 0.4724% ( 22) 00:12:19.684 10128.291 - 10187.869: 0.9084% ( 48) 00:12:19.684 10187.869 - 10247.447: 1.6443% ( 81) 00:12:19.684 10247.447 - 10307.025: 2.6617% ( 112) 00:12:19.684 10307.025 - 10366.604: 3.8790% ( 134) 00:12:19.684 10366.604 - 10426.182: 5.5414% ( 183) 00:12:19.684 10426.182 - 10485.760: 7.5945% ( 226) 00:12:19.684 10485.760 - 10545.338: 9.9019% ( 254) 00:12:19.684 10545.338 - 10604.916: 12.9270% ( 333) 00:12:19.684 10604.916 - 10664.495: 16.3336% ( 375) 00:12:19.684 10664.495 - 10724.073: 20.0491% ( 409) 00:12:19.684 10724.073 - 10783.651: 24.1733% ( 454) 00:12:19.684 10783.651 - 10843.229: 28.5883% ( 486) 00:12:19.684 10843.229 - 10902.807: 32.7762% ( 461) 00:12:19.684 10902.807 - 10962.385: 37.0549% ( 471) 00:12:19.684 10962.385 - 11021.964: 41.2882% ( 466) 00:12:19.684 11021.964 - 11081.542: 45.4215% ( 455) 00:12:19.684 11081.542 - 11141.120: 49.8637% ( 489) 00:12:19.684 11141.120 - 11200.698: 53.4884% ( 399) 00:12:19.684 11200.698 - 11260.276: 57.2220% ( 411) 00:12:19.684 11260.276 - 11319.855: 60.5378% ( 365) 00:12:19.684 11319.855 - 11379.433: 63.6537% ( 343) 00:12:19.684 11379.433 - 11439.011: 66.8786% ( 355) 00:12:19.684 11439.011 - 11498.589: 69.5676% ( 296) 00:12:19.684 11498.589 - 11558.167: 72.4110% ( 313) 00:12:19.684 11558.167 - 11617.745: 74.9727% ( 282) 00:12:19.684 11617.745 - 11677.324: 76.9168% ( 214) 00:12:19.684 11677.324 - 11736.902: 78.5429% ( 179) 00:12:19.684 11736.902 - 11796.480: 79.8056% ( 139) 00:12:19.684 11796.480 - 11856.058: 81.0047% ( 132) 00:12:19.684 11856.058 - 11915.636: 82.2311% ( 135) 00:12:19.684 11915.636 - 11975.215: 83.1940% ( 106) 00:12:19.684 11975.215 - 12034.793: 84.1025% ( 100) 00:12:19.684 12034.793 - 12094.371: 84.8837% ( 86) 00:12:19.684 12094.371 - 12153.949: 85.7195% ( 92) 00:12:19.684 12153.949 - 12213.527: 86.3554% ( 70) 00:12:19.684 12213.527 - 12273.105: 86.9549% ( 66) 00:12:19.684 12273.105 - 12332.684: 87.5363% ( 64) 00:12:19.684 12332.684 - 12392.262: 88.1086% ( 63) 00:12:19.684 12392.262 - 12451.840: 88.6355% ( 58) 00:12:19.684 12451.840 - 12511.418: 89.1624% ( 58) 00:12:19.684 12511.418 - 12570.996: 89.5894% ( 47) 00:12:19.684 12570.996 - 12630.575: 90.1072% ( 57) 00:12:19.684 12630.575 - 12690.153: 90.6250% ( 57) 00:12:19.684 12690.153 - 12749.731: 91.0247% ( 44) 00:12:19.684 12749.731 - 12809.309: 91.4426% ( 46) 
00:12:19.684 12809.309 - 12868.887: 91.8514% ( 45) 00:12:19.684 12868.887 - 12928.465: 92.1421% ( 32) 00:12:19.684 12928.465 - 12988.044: 92.5690% ( 47) 00:12:19.684 12988.044 - 13047.622: 92.8597% ( 32) 00:12:19.684 13047.622 - 13107.200: 93.1868% ( 36) 00:12:19.684 13107.200 - 13166.778: 93.4684% ( 31) 00:12:19.684 13166.778 - 13226.356: 93.7227% ( 28) 00:12:19.684 13226.356 - 13285.935: 94.1134% ( 43) 00:12:19.684 13285.935 - 13345.513: 94.5131% ( 44) 00:12:19.684 13345.513 - 13405.091: 94.8219% ( 34) 00:12:19.684 13405.091 - 13464.669: 95.1036% ( 31) 00:12:19.684 13464.669 - 13524.247: 95.5578% ( 50) 00:12:19.684 13524.247 - 13583.825: 96.1846% ( 69) 00:12:19.684 13583.825 - 13643.404: 96.6661% ( 53) 00:12:19.684 13643.404 - 13702.982: 97.2656% ( 66) 00:12:19.684 13702.982 - 13762.560: 97.5291% ( 29) 00:12:19.684 13762.560 - 13822.138: 97.8198% ( 32) 00:12:19.684 13822.138 - 13881.716: 98.0923% ( 30) 00:12:19.684 13881.716 - 13941.295: 98.2831% ( 21) 00:12:19.684 13941.295 - 14000.873: 98.4466% ( 18) 00:12:19.684 14000.873 - 14060.451: 98.5919% ( 16) 00:12:19.684 14060.451 - 14120.029: 98.6464% ( 6) 00:12:19.684 14120.029 - 14179.607: 98.7100% ( 7) 00:12:19.684 14179.607 - 14239.185: 98.7191% ( 1) 00:12:19.684 14239.185 - 14298.764: 98.7464% ( 3) 00:12:19.684 14298.764 - 14358.342: 98.7736% ( 3) 00:12:19.684 14358.342 - 14417.920: 98.7918% ( 2) 00:12:19.684 14417.920 - 14477.498: 98.8190% ( 3) 00:12:19.684 14477.498 - 14537.076: 98.8372% ( 2) 00:12:19.684 32648.844 - 32887.156: 98.8917% ( 6) 00:12:19.684 32887.156 - 33125.469: 98.9462% ( 6) 00:12:19.684 33125.469 - 33363.782: 99.0007% ( 6) 00:12:19.684 33363.782 - 33602.095: 99.0552% ( 6) 00:12:19.684 33602.095 - 33840.407: 99.1097% ( 6) 00:12:19.684 33840.407 - 34078.720: 99.1642% ( 6) 00:12:19.684 34078.720 - 34317.033: 99.2097% ( 5) 00:12:19.684 34317.033 - 34555.345: 99.2642% ( 6) 00:12:19.684 34555.345 - 34793.658: 99.3187% ( 6) 00:12:19.684 34793.658 - 35031.971: 99.3732% ( 6) 00:12:19.684 35031.971 - 35270.284: 99.4186% ( 5) 00:12:19.684 40036.538 - 40274.851: 99.4549% ( 4) 00:12:19.684 40274.851 - 40513.164: 99.5185% ( 7) 00:12:19.684 40513.164 - 40751.476: 99.5640% ( 5) 00:12:19.684 40751.476 - 40989.789: 99.6185% ( 6) 00:12:19.684 40989.789 - 41228.102: 99.6730% ( 6) 00:12:19.684 41228.102 - 41466.415: 99.7275% ( 6) 00:12:19.684 41466.415 - 41704.727: 99.7820% ( 6) 00:12:19.684 41704.727 - 41943.040: 99.8365% ( 6) 00:12:19.684 41943.040 - 42181.353: 99.8910% ( 6) 00:12:19.684 42181.353 - 42419.665: 99.9455% ( 6) 00:12:19.684 42419.665 - 42657.978: 100.0000% ( 6) 00:12:19.684 00:12:19.684 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:12:19.684 ============================================================================== 00:12:19.684 Range in us Cumulative IO count 00:12:19.684 9770.822 - 9830.400: 0.0091% ( 1) 00:12:19.684 9889.978 - 9949.556: 0.0818% ( 8) 00:12:19.684 9949.556 - 10009.135: 0.1817% ( 11) 00:12:19.684 10009.135 - 10068.713: 0.3452% ( 18) 00:12:19.684 10068.713 - 10128.291: 0.6086% ( 29) 00:12:19.684 10128.291 - 10187.869: 1.1628% ( 61) 00:12:19.684 10187.869 - 10247.447: 1.9895% ( 91) 00:12:19.684 10247.447 - 10307.025: 3.2794% ( 142) 00:12:19.684 10307.025 - 10366.604: 4.5149% ( 136) 00:12:19.684 10366.604 - 10426.182: 6.4499% ( 213) 00:12:19.684 10426.182 - 10485.760: 8.5211% ( 228) 00:12:19.684 10485.760 - 10545.338: 11.0919% ( 283) 00:12:19.684 10545.338 - 10604.916: 13.8808% ( 307) 00:12:19.684 10604.916 - 10664.495: 17.0331% ( 347) 00:12:19.684 10664.495 - 10724.073: 20.5487% ( 387) 
00:12:19.684 10724.073 - 10783.651: 24.7184% ( 459) 00:12:19.684 10783.651 - 10843.229: 28.7427% ( 443) 00:12:19.684 10843.229 - 10902.807: 32.7035% ( 436) 00:12:19.684 10902.807 - 10962.385: 37.1730% ( 492) 00:12:19.684 10962.385 - 11021.964: 41.4426% ( 470) 00:12:19.684 11021.964 - 11081.542: 45.3761% ( 433) 00:12:19.684 11081.542 - 11141.120: 49.4277% ( 446) 00:12:19.684 11141.120 - 11200.698: 53.3794% ( 435) 00:12:19.684 11200.698 - 11260.276: 57.5491% ( 459) 00:12:19.684 11260.276 - 11319.855: 60.8830% ( 367) 00:12:19.684 11319.855 - 11379.433: 64.2169% ( 367) 00:12:19.684 11379.433 - 11439.011: 67.2693% ( 336) 00:12:19.684 11439.011 - 11498.589: 69.9310% ( 293) 00:12:19.684 11498.589 - 11558.167: 72.3020% ( 261) 00:12:19.684 11558.167 - 11617.745: 74.3187% ( 222) 00:12:19.684 11617.745 - 11677.324: 75.9084% ( 175) 00:12:19.684 11677.324 - 11736.902: 77.3801% ( 162) 00:12:19.684 11736.902 - 11796.480: 78.6065% ( 135) 00:12:19.684 11796.480 - 11856.058: 79.6421% ( 114) 00:12:19.684 11856.058 - 11915.636: 80.5687% ( 102) 00:12:19.684 11915.636 - 11975.215: 81.6497% ( 119) 00:12:19.684 11975.215 - 12034.793: 82.6126% ( 106) 00:12:19.684 12034.793 - 12094.371: 83.6573% ( 115) 00:12:19.684 12094.371 - 12153.949: 84.6021% ( 104) 00:12:19.684 12153.949 - 12213.527: 85.3379% ( 81) 00:12:19.684 12213.527 - 12273.105: 86.1555% ( 90) 00:12:19.684 12273.105 - 12332.684: 87.1730% ( 112) 00:12:19.684 12332.684 - 12392.262: 87.8452% ( 74) 00:12:19.684 12392.262 - 12451.840: 88.6174% ( 85) 00:12:19.684 12451.840 - 12511.418: 89.1170% ( 55) 00:12:19.684 12511.418 - 12570.996: 89.6530% ( 59) 00:12:19.684 12570.996 - 12630.575: 90.2980% ( 71) 00:12:19.684 12630.575 - 12690.153: 90.6704% ( 41) 00:12:19.684 12690.153 - 12749.731: 91.1337% ( 51) 00:12:19.684 12749.731 - 12809.309: 91.5879% ( 50) 00:12:19.684 12809.309 - 12868.887: 91.8968% ( 34) 00:12:19.684 12868.887 - 12928.465: 92.4237% ( 58) 00:12:19.684 12928.465 - 12988.044: 93.0051% ( 64) 00:12:19.684 12988.044 - 13047.622: 93.4593% ( 50) 00:12:19.684 13047.622 - 13107.200: 93.9680% ( 56) 00:12:19.684 13107.200 - 13166.778: 94.4404% ( 52) 00:12:19.684 13166.778 - 13226.356: 94.9128% ( 52) 00:12:19.684 13226.356 - 13285.935: 95.3852% ( 52) 00:12:19.684 13285.935 - 13345.513: 95.6850% ( 33) 00:12:19.684 13345.513 - 13405.091: 95.9575% ( 30) 00:12:19.684 13405.091 - 13464.669: 96.1755% ( 24) 00:12:19.684 13464.669 - 13524.247: 96.4571% ( 31) 00:12:19.684 13524.247 - 13583.825: 96.6933% ( 26) 00:12:19.684 13583.825 - 13643.404: 97.0567% ( 40) 00:12:19.684 13643.404 - 13702.982: 97.2929% ( 26) 00:12:19.684 13702.982 - 13762.560: 97.4927% ( 22) 00:12:19.684 13762.560 - 13822.138: 97.7562% ( 29) 00:12:19.684 13822.138 - 13881.716: 98.1195% ( 40) 00:12:19.684 13881.716 - 13941.295: 98.3012% ( 20) 00:12:19.684 13941.295 - 14000.873: 98.4284% ( 14) 00:12:19.684 14000.873 - 14060.451: 98.5374% ( 12) 00:12:19.684 14060.451 - 14120.029: 98.6283% ( 10) 00:12:19.684 14120.029 - 14179.607: 98.6828% ( 6) 00:12:19.684 14179.607 - 14239.185: 98.7464% ( 7) 00:12:19.684 14239.185 - 14298.764: 98.7827% ( 4) 00:12:19.684 14298.764 - 14358.342: 98.8009% ( 2) 00:12:19.684 14358.342 - 14417.920: 98.8281% ( 3) 00:12:19.684 14417.920 - 14477.498: 98.8372% ( 1) 00:12:19.684 30027.404 - 30146.560: 98.8463% ( 1) 00:12:19.684 30146.560 - 30265.716: 98.8645% ( 2) 00:12:19.685 30265.716 - 30384.873: 98.8917% ( 3) 00:12:19.685 30384.873 - 30504.029: 98.9281% ( 4) 00:12:19.685 30504.029 - 30742.342: 98.9826% ( 6) 00:12:19.685 30742.342 - 30980.655: 99.0280% ( 5) 00:12:19.685 30980.655 
- 31218.967: 99.0825% ( 6) 00:12:19.685 31218.967 - 31457.280: 99.1370% ( 6) 00:12:19.685 31457.280 - 31695.593: 99.1824% ( 5) 00:12:19.685 31695.593 - 31933.905: 99.2369% ( 6) 00:12:19.685 31933.905 - 32172.218: 99.2914% ( 6) 00:12:19.685 32172.218 - 32410.531: 99.3368% ( 5) 00:12:19.685 32410.531 - 32648.844: 99.3823% ( 5) 00:12:19.685 32648.844 - 32887.156: 99.4186% ( 4) 00:12:19.685 37891.724 - 38130.036: 99.4731% ( 6) 00:12:19.685 38130.036 - 38368.349: 99.5276% ( 6) 00:12:19.685 38368.349 - 38606.662: 99.5730% ( 5) 00:12:19.685 38606.662 - 38844.975: 99.6275% ( 6) 00:12:19.685 38844.975 - 39083.287: 99.6820% ( 6) 00:12:19.685 39083.287 - 39321.600: 99.7366% ( 6) 00:12:19.685 39321.600 - 39559.913: 99.7911% ( 6) 00:12:19.685 39559.913 - 39798.225: 99.8456% ( 6) 00:12:19.685 39798.225 - 40036.538: 99.8910% ( 5) 00:12:19.685 40036.538 - 40274.851: 99.9455% ( 6) 00:12:19.685 40274.851 - 40513.164: 100.0000% ( 6) 00:12:19.685 00:12:19.685 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:12:19.685 ============================================================================== 00:12:19.685 Range in us Cumulative IO count 00:12:19.685 9830.400 - 9889.978: 0.0273% ( 3) 00:12:19.685 9889.978 - 9949.556: 0.1181% ( 10) 00:12:19.685 9949.556 - 10009.135: 0.2907% ( 19) 00:12:19.685 10009.135 - 10068.713: 0.5360% ( 27) 00:12:19.685 10068.713 - 10128.291: 0.8630% ( 36) 00:12:19.685 10128.291 - 10187.869: 1.4535% ( 65) 00:12:19.685 10187.869 - 10247.447: 2.2166% ( 84) 00:12:19.685 10247.447 - 10307.025: 3.2249% ( 111) 00:12:19.685 10307.025 - 10366.604: 4.4059% ( 130) 00:12:19.685 10366.604 - 10426.182: 6.1773% ( 195) 00:12:19.685 10426.182 - 10485.760: 8.3576% ( 240) 00:12:19.685 10485.760 - 10545.338: 10.8194% ( 271) 00:12:19.685 10545.338 - 10604.916: 13.6810% ( 315) 00:12:19.685 10604.916 - 10664.495: 17.0694% ( 373) 00:12:19.685 10664.495 - 10724.073: 20.9393% ( 426) 00:12:19.685 10724.073 - 10783.651: 24.7184% ( 416) 00:12:19.685 10783.651 - 10843.229: 28.6973% ( 438) 00:12:19.685 10843.229 - 10902.807: 32.7398% ( 445) 00:12:19.685 10902.807 - 10962.385: 36.7369% ( 440) 00:12:19.685 10962.385 - 11021.964: 40.7340% ( 440) 00:12:19.685 11021.964 - 11081.542: 44.3859% ( 402) 00:12:19.685 11081.542 - 11141.120: 48.4920% ( 452) 00:12:19.685 11141.120 - 11200.698: 52.2892% ( 418) 00:12:19.685 11200.698 - 11260.276: 56.1047% ( 420) 00:12:19.685 11260.276 - 11319.855: 59.8565% ( 413) 00:12:19.685 11319.855 - 11379.433: 63.5447% ( 406) 00:12:19.685 11379.433 - 11439.011: 66.8241% ( 361) 00:12:19.685 11439.011 - 11498.589: 69.3496% ( 278) 00:12:19.685 11498.589 - 11558.167: 71.5843% ( 246) 00:12:19.685 11558.167 - 11617.745: 73.7827% ( 242) 00:12:19.685 11617.745 - 11677.324: 75.9448% ( 238) 00:12:19.685 11677.324 - 11736.902: 77.6890% ( 192) 00:12:19.685 11736.902 - 11796.480: 79.5058% ( 200) 00:12:19.685 11796.480 - 11856.058: 80.8957% ( 153) 00:12:19.685 11856.058 - 11915.636: 82.1312% ( 136) 00:12:19.685 11915.636 - 11975.215: 82.9033% ( 85) 00:12:19.685 11975.215 - 12034.793: 83.6846% ( 86) 00:12:19.685 12034.793 - 12094.371: 84.3841% ( 77) 00:12:19.685 12094.371 - 12153.949: 85.0018% ( 68) 00:12:19.685 12153.949 - 12213.527: 85.6559% ( 72) 00:12:19.685 12213.527 - 12273.105: 86.3826% ( 80) 00:12:19.685 12273.105 - 12332.684: 87.0912% ( 78) 00:12:19.685 12332.684 - 12392.262: 87.9360% ( 93) 00:12:19.685 12392.262 - 12451.840: 88.3993% ( 51) 00:12:19.685 12451.840 - 12511.418: 88.9807% ( 64) 00:12:19.685 12511.418 - 12570.996: 89.5258% ( 60) 00:12:19.685 12570.996 - 12630.575: 90.0527% 
( 58) 00:12:19.685 12630.575 - 12690.153: 90.6159% ( 62) 00:12:19.685 12690.153 - 12749.731: 91.2427% ( 69) 00:12:19.685 12749.731 - 12809.309: 91.7787% ( 59) 00:12:19.685 12809.309 - 12868.887: 92.4509% ( 74) 00:12:19.685 12868.887 - 12928.465: 92.9415% ( 54) 00:12:19.685 12928.465 - 12988.044: 93.3140% ( 41) 00:12:19.685 12988.044 - 13047.622: 93.6864% ( 41) 00:12:19.685 13047.622 - 13107.200: 93.9499% ( 29) 00:12:19.685 13107.200 - 13166.778: 94.3405% ( 43) 00:12:19.685 13166.778 - 13226.356: 94.5948% ( 28) 00:12:19.685 13226.356 - 13285.935: 94.7856% ( 21) 00:12:19.685 13285.935 - 13345.513: 94.9673% ( 20) 00:12:19.685 13345.513 - 13405.091: 95.2489% ( 31) 00:12:19.685 13405.091 - 13464.669: 95.5941% ( 38) 00:12:19.685 13464.669 - 13524.247: 95.9757% ( 42) 00:12:19.685 13524.247 - 13583.825: 96.3209% ( 38) 00:12:19.685 13583.825 - 13643.404: 96.7024% ( 42) 00:12:19.685 13643.404 - 13702.982: 96.9386% ( 26) 00:12:19.685 13702.982 - 13762.560: 97.2293% ( 32) 00:12:19.685 13762.560 - 13822.138: 97.4382% ( 23) 00:12:19.685 13822.138 - 13881.716: 97.6472% ( 23) 00:12:19.685 13881.716 - 13941.295: 97.8652% ( 24) 00:12:19.685 13941.295 - 14000.873: 98.0560% ( 21) 00:12:19.685 14000.873 - 14060.451: 98.1741% ( 13) 00:12:19.685 14060.451 - 14120.029: 98.3012% ( 14) 00:12:19.685 14120.029 - 14179.607: 98.3921% ( 10) 00:12:19.685 14179.607 - 14239.185: 98.5283% ( 15) 00:12:19.685 14239.185 - 14298.764: 98.5828% ( 6) 00:12:19.685 14298.764 - 14358.342: 98.6374% ( 6) 00:12:19.685 14358.342 - 14417.920: 98.6828% ( 5) 00:12:19.685 14417.920 - 14477.498: 98.7191% ( 4) 00:12:19.685 14477.498 - 14537.076: 98.7464% ( 3) 00:12:19.685 14537.076 - 14596.655: 98.7918% ( 5) 00:12:19.685 14596.655 - 14656.233: 98.8100% ( 2) 00:12:19.685 14656.233 - 14715.811: 98.8281% ( 2) 00:12:19.944 14715.811 - 14775.389: 98.8372% ( 1) 00:12:19.944 27167.651 - 27286.807: 98.8554% ( 2) 00:12:19.944 27286.807 - 27405.964: 98.8826% ( 3) 00:12:19.944 27405.964 - 27525.120: 98.9099% ( 3) 00:12:19.944 27525.120 - 27644.276: 98.9281% ( 2) 00:12:19.944 27644.276 - 27763.433: 98.9553% ( 3) 00:12:19.945 27763.433 - 27882.589: 98.9826% ( 3) 00:12:19.945 27882.589 - 28001.745: 99.0098% ( 3) 00:12:19.945 28001.745 - 28120.902: 99.0461% ( 4) 00:12:19.945 28120.902 - 28240.058: 99.0734% ( 3) 00:12:19.945 28240.058 - 28359.215: 99.1007% ( 3) 00:12:19.945 28359.215 - 28478.371: 99.1279% ( 3) 00:12:19.945 28478.371 - 28597.527: 99.1552% ( 3) 00:12:19.945 28597.527 - 28716.684: 99.1824% ( 3) 00:12:19.945 28716.684 - 28835.840: 99.2097% ( 3) 00:12:19.945 28835.840 - 28954.996: 99.2369% ( 3) 00:12:19.945 28954.996 - 29074.153: 99.2642% ( 3) 00:12:19.945 29074.153 - 29193.309: 99.2823% ( 2) 00:12:19.945 29193.309 - 29312.465: 99.3096% ( 3) 00:12:19.945 29312.465 - 29431.622: 99.3278% ( 2) 00:12:19.945 29431.622 - 29550.778: 99.3641% ( 4) 00:12:19.945 29550.778 - 29669.935: 99.3914% ( 3) 00:12:19.945 29669.935 - 29789.091: 99.4095% ( 2) 00:12:19.945 29789.091 - 29908.247: 99.4186% ( 1) 00:12:19.945 34793.658 - 35031.971: 99.4459% ( 3) 00:12:19.945 35031.971 - 35270.284: 99.5004% ( 6) 00:12:19.945 35270.284 - 35508.596: 99.5549% ( 6) 00:12:19.945 35508.596 - 35746.909: 99.6094% ( 6) 00:12:19.945 35746.909 - 35985.222: 99.6730% ( 7) 00:12:19.945 35985.222 - 36223.535: 99.7275% ( 6) 00:12:19.945 36223.535 - 36461.847: 99.7820% ( 6) 00:12:19.945 36461.847 - 36700.160: 99.8365% ( 6) 00:12:19.945 36700.160 - 36938.473: 99.8910% ( 6) 00:12:19.945 36938.473 - 37176.785: 99.9455% ( 6) 00:12:19.945 37176.785 - 37415.098: 100.0000% ( 6) 00:12:19.945 
00:12:19.945 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:12:19.945 ============================================================================== 00:12:19.945 Range in us Cumulative IO count 00:12:19.945 9592.087 - 9651.665: 0.0090% ( 1) 00:12:19.945 9830.400 - 9889.978: 0.0723% ( 7) 00:12:19.945 9889.978 - 9949.556: 0.1987% ( 14) 00:12:19.945 9949.556 - 10009.135: 0.3432% ( 16) 00:12:19.945 10009.135 - 10068.713: 0.6774% ( 37) 00:12:19.945 10068.713 - 10128.291: 1.2283% ( 61) 00:12:19.945 10128.291 - 10187.869: 1.7522% ( 58) 00:12:19.945 10187.869 - 10247.447: 2.3754% ( 69) 00:12:19.945 10247.447 - 10307.025: 3.2514% ( 97) 00:12:19.945 10307.025 - 10366.604: 4.5249% ( 141) 00:12:19.945 10366.604 - 10426.182: 6.2410% ( 190) 00:12:19.945 10426.182 - 10485.760: 8.5079% ( 251) 00:12:19.945 10485.760 - 10545.338: 11.0188% ( 278) 00:12:19.945 10545.338 - 10604.916: 14.0173% ( 332) 00:12:19.945 10604.916 - 10664.495: 17.3862% ( 373) 00:12:19.945 10664.495 - 10724.073: 21.0712% ( 408) 00:12:19.945 10724.073 - 10783.651: 24.7381% ( 406) 00:12:19.945 10783.651 - 10843.229: 28.8114% ( 451) 00:12:19.945 10843.229 - 10902.807: 32.7764% ( 439) 00:12:19.945 10902.807 - 10962.385: 36.5155% ( 414) 00:12:19.945 10962.385 - 11021.964: 40.3179% ( 421) 00:12:19.945 11021.964 - 11081.542: 43.8403% ( 390) 00:12:19.945 11081.542 - 11141.120: 47.6879% ( 426) 00:12:19.945 11141.120 - 11200.698: 51.4180% ( 413) 00:12:19.945 11200.698 - 11260.276: 54.8230% ( 377) 00:12:19.945 11260.276 - 11319.855: 58.4899% ( 406) 00:12:19.945 11319.855 - 11379.433: 61.9129% ( 379) 00:12:19.945 11379.433 - 11439.011: 65.5798% ( 406) 00:12:19.945 11439.011 - 11498.589: 68.8223% ( 359) 00:12:19.945 11498.589 - 11558.167: 71.2518% ( 269) 00:12:19.945 11558.167 - 11617.745: 73.6904% ( 270) 00:12:19.945 11617.745 - 11677.324: 75.9754% ( 253) 00:12:19.945 11677.324 - 11736.902: 77.6463% ( 185) 00:12:19.945 11736.902 - 11796.480: 79.1004% ( 161) 00:12:19.945 11796.480 - 11856.058: 80.4371% ( 148) 00:12:19.945 11856.058 - 11915.636: 81.7467% ( 145) 00:12:19.945 11915.636 - 11975.215: 82.8938% ( 127) 00:12:19.945 11975.215 - 12034.793: 83.8963% ( 111) 00:12:19.945 12034.793 - 12094.371: 84.6279% ( 81) 00:12:19.945 12094.371 - 12153.949: 85.2150% ( 65) 00:12:19.945 12153.949 - 12213.527: 85.9736% ( 84) 00:12:19.945 12213.527 - 12273.105: 86.6059% ( 70) 00:12:19.945 12273.105 - 12332.684: 87.3013% ( 77) 00:12:19.945 12332.684 - 12392.262: 87.9426% ( 71) 00:12:19.945 12392.262 - 12451.840: 88.5567% ( 68) 00:12:19.945 12451.840 - 12511.418: 89.0986% ( 60) 00:12:19.945 12511.418 - 12570.996: 89.6044% ( 56) 00:12:19.945 12570.996 - 12630.575: 90.1553% ( 61) 00:12:19.945 12630.575 - 12690.153: 90.6521% ( 55) 00:12:19.945 12690.153 - 12749.731: 91.1037% ( 50) 00:12:19.945 12749.731 - 12809.309: 91.5824% ( 53) 00:12:19.945 12809.309 - 12868.887: 92.2598% ( 75) 00:12:19.945 12868.887 - 12928.465: 92.8197% ( 62) 00:12:19.945 12928.465 - 12988.044: 93.2623% ( 49) 00:12:19.945 12988.044 - 13047.622: 93.7771% ( 57) 00:12:19.945 13047.622 - 13107.200: 94.1564% ( 42) 00:12:19.945 13107.200 - 13166.778: 94.5719% ( 46) 00:12:19.945 13166.778 - 13226.356: 95.0325% ( 51) 00:12:19.945 13226.356 - 13285.935: 95.2854% ( 28) 00:12:19.945 13285.935 - 13345.513: 95.5383% ( 28) 00:12:19.945 13345.513 - 13405.091: 95.8544% ( 35) 00:12:19.945 13405.091 - 13464.669: 96.0802% ( 25) 00:12:19.945 13464.669 - 13524.247: 96.2157% ( 15) 00:12:19.945 13524.247 - 13583.825: 96.3421% ( 14) 00:12:19.945 13583.825 - 13643.404: 96.4866% ( 16) 00:12:19.945 
13643.404 - 13702.982: 96.7305% ( 27) 00:12:19.945 13702.982 - 13762.560: 96.8569% ( 14) 00:12:19.945 13762.560 - 13822.138: 96.9834% ( 14) 00:12:19.945 13822.138 - 13881.716: 97.1730% ( 21) 00:12:19.945 13881.716 - 13941.295: 97.3537% ( 20) 00:12:19.945 13941.295 - 14000.873: 97.7059% ( 39) 00:12:19.945 14000.873 - 14060.451: 97.8504% ( 16) 00:12:19.945 14060.451 - 14120.029: 97.9678% ( 13) 00:12:19.945 14120.029 - 14179.607: 98.0762% ( 12) 00:12:19.945 14179.607 - 14239.185: 98.1485% ( 8) 00:12:19.945 14239.185 - 14298.764: 98.2207% ( 8) 00:12:19.945 14298.764 - 14358.342: 98.3472% ( 14) 00:12:19.945 14358.342 - 14417.920: 98.4375% ( 10) 00:12:19.945 14417.920 - 14477.498: 98.5188% ( 9) 00:12:19.945 14477.498 - 14537.076: 98.6001% ( 9) 00:12:19.945 14537.076 - 14596.655: 98.6633% ( 7) 00:12:19.945 14596.655 - 14656.233: 98.7175% ( 6) 00:12:19.945 14656.233 - 14715.811: 98.7626% ( 5) 00:12:19.945 14715.811 - 14775.389: 98.7897% ( 3) 00:12:19.945 14775.389 - 14834.967: 98.8168% ( 3) 00:12:19.945 14834.967 - 14894.545: 98.8439% ( 3) 00:12:19.945 18230.924 - 18350.080: 98.8620% ( 2) 00:12:19.945 18350.080 - 18469.236: 98.8801% ( 2) 00:12:19.945 18469.236 - 18588.393: 98.9072% ( 3) 00:12:19.945 18588.393 - 18707.549: 98.9342% ( 3) 00:12:19.945 18707.549 - 18826.705: 98.9523% ( 2) 00:12:19.945 18826.705 - 18945.862: 98.9884% ( 4) 00:12:19.945 18945.862 - 19065.018: 99.0155% ( 3) 00:12:19.945 19065.018 - 19184.175: 99.0426% ( 3) 00:12:19.945 19184.175 - 19303.331: 99.0697% ( 3) 00:12:19.945 19303.331 - 19422.487: 99.0968% ( 3) 00:12:19.945 19422.487 - 19541.644: 99.1239% ( 3) 00:12:19.945 19541.644 - 19660.800: 99.1510% ( 3) 00:12:19.945 19660.800 - 19779.956: 99.1691% ( 2) 00:12:19.945 19779.956 - 19899.113: 99.1962% ( 3) 00:12:19.945 19899.113 - 20018.269: 99.2233% ( 3) 00:12:19.945 20018.269 - 20137.425: 99.2504% ( 3) 00:12:19.945 20137.425 - 20256.582: 99.2775% ( 3) 00:12:19.945 20256.582 - 20375.738: 99.3046% ( 3) 00:12:19.945 20375.738 - 20494.895: 99.3316% ( 3) 00:12:19.945 20494.895 - 20614.051: 99.3587% ( 3) 00:12:19.945 20614.051 - 20733.207: 99.3768% ( 2) 00:12:19.945 20733.207 - 20852.364: 99.4039% ( 3) 00:12:19.945 20852.364 - 20971.520: 99.4220% ( 2) 00:12:19.945 25976.087 - 26095.244: 99.4400% ( 2) 00:12:19.945 26095.244 - 26214.400: 99.4671% ( 3) 00:12:19.945 26214.400 - 26333.556: 99.4942% ( 3) 00:12:19.945 26333.556 - 26452.713: 99.5213% ( 3) 00:12:19.945 26452.713 - 26571.869: 99.5394% ( 2) 00:12:19.945 26571.869 - 26691.025: 99.5665% ( 3) 00:12:19.945 26691.025 - 26810.182: 99.5936% ( 3) 00:12:19.945 26810.182 - 26929.338: 99.6116% ( 2) 00:12:19.945 26929.338 - 27048.495: 99.6387% ( 3) 00:12:19.945 27048.495 - 27167.651: 99.6658% ( 3) 00:12:19.945 27167.651 - 27286.807: 99.7020% ( 4) 00:12:19.945 27286.807 - 27405.964: 99.7200% ( 2) 00:12:19.945 27405.964 - 27525.120: 99.7471% ( 3) 00:12:19.945 27525.120 - 27644.276: 99.7742% ( 3) 00:12:19.945 27644.276 - 27763.433: 99.8013% ( 3) 00:12:19.945 27763.433 - 27882.589: 99.8194% ( 2) 00:12:19.945 27882.589 - 28001.745: 99.8465% ( 3) 00:12:19.945 28001.745 - 28120.902: 99.8736% ( 3) 00:12:19.945 28120.902 - 28240.058: 99.9007% ( 3) 00:12:19.945 28240.058 - 28359.215: 99.9277% ( 3) 00:12:19.945 28359.215 - 28478.371: 99.9548% ( 3) 00:12:19.945 28478.371 - 28597.527: 99.9819% ( 3) 00:12:19.945 28597.527 - 28716.684: 100.0000% ( 2) 00:12:19.945 00:12:19.945 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:12:19.945 ============================================================================== 00:12:19.945 Range in us 
Cumulative IO count 00:12:19.945 9770.822 - 9830.400: 0.0542% ( 6) 00:12:19.945 9830.400 - 9889.978: 0.1264% ( 8) 00:12:19.945 9889.978 - 9949.556: 0.2258% ( 11) 00:12:19.945 9949.556 - 10009.135: 0.3161% ( 10) 00:12:19.945 10009.135 - 10068.713: 0.4787% ( 18) 00:12:19.945 10068.713 - 10128.291: 1.0567% ( 64) 00:12:19.945 10128.291 - 10187.869: 1.5083% ( 50) 00:12:19.945 10187.869 - 10247.447: 2.0683% ( 62) 00:12:19.945 10247.447 - 10307.025: 2.8360% ( 85) 00:12:19.945 10307.025 - 10366.604: 4.0553% ( 135) 00:12:19.945 10366.604 - 10426.182: 5.8887% ( 203) 00:12:19.945 10426.182 - 10485.760: 8.0022% ( 234) 00:12:19.945 10485.760 - 10545.338: 10.4949% ( 276) 00:12:19.945 10545.338 - 10604.916: 13.5206% ( 335) 00:12:19.945 10604.916 - 10664.495: 17.0340% ( 389) 00:12:19.945 10664.495 - 10724.073: 20.3306% ( 365) 00:12:19.946 10724.073 - 10783.651: 24.4220% ( 453) 00:12:19.946 10783.651 - 10843.229: 28.2605% ( 425) 00:12:19.946 10843.229 - 10902.807: 32.4422% ( 463) 00:12:19.946 10902.807 - 10962.385: 37.3103% ( 539) 00:12:19.946 10962.385 - 11021.964: 41.3204% ( 444) 00:12:19.946 11021.964 - 11081.542: 45.0867% ( 417) 00:12:19.946 11081.542 - 11141.120: 49.0336% ( 437) 00:12:19.946 11141.120 - 11200.698: 52.8631% ( 424) 00:12:19.946 11200.698 - 11260.276: 56.5661% ( 410) 00:12:19.946 11260.276 - 11319.855: 59.8176% ( 360) 00:12:19.946 11319.855 - 11379.433: 62.9787% ( 350) 00:12:19.946 11379.433 - 11439.011: 65.8418% ( 317) 00:12:19.946 11439.011 - 11498.589: 68.5965% ( 305) 00:12:19.946 11498.589 - 11558.167: 70.7551% ( 239) 00:12:19.946 11558.167 - 11617.745: 72.8414% ( 231) 00:12:19.946 11617.745 - 11677.324: 74.7381% ( 210) 00:12:19.946 11677.324 - 11736.902: 76.6980% ( 217) 00:12:19.946 11736.902 - 11796.480: 78.3689% ( 185) 00:12:19.946 11796.480 - 11856.058: 80.1572% ( 198) 00:12:19.946 11856.058 - 11915.636: 81.3674% ( 134) 00:12:19.946 11915.636 - 11975.215: 82.5054% ( 126) 00:12:19.946 11975.215 - 12034.793: 83.5170% ( 112) 00:12:19.946 12034.793 - 12094.371: 84.5195% ( 111) 00:12:19.946 12094.371 - 12153.949: 85.4588% ( 104) 00:12:19.946 12153.949 - 12213.527: 86.0910% ( 70) 00:12:19.946 12213.527 - 12273.105: 86.7504% ( 73) 00:12:19.946 12273.105 - 12332.684: 87.3013% ( 61) 00:12:19.946 12332.684 - 12392.262: 87.8884% ( 65) 00:12:19.946 12392.262 - 12451.840: 88.7283% ( 93) 00:12:19.946 12451.840 - 12511.418: 89.0806% ( 39) 00:12:19.946 12511.418 - 12570.996: 89.4599% ( 42) 00:12:19.946 12570.996 - 12630.575: 90.0108% ( 61) 00:12:19.946 12630.575 - 12690.153: 90.6160% ( 67) 00:12:19.946 12690.153 - 12749.731: 91.0314% ( 46) 00:12:19.946 12749.731 - 12809.309: 91.6004% ( 63) 00:12:19.946 12809.309 - 12868.887: 92.0611% ( 51) 00:12:19.946 12868.887 - 12928.465: 92.4133% ( 39) 00:12:19.946 12928.465 - 12988.044: 92.8017% ( 43) 00:12:19.946 12988.044 - 13047.622: 93.2171% ( 46) 00:12:19.946 13047.622 - 13107.200: 93.9487% ( 81) 00:12:19.946 13107.200 - 13166.778: 94.3913% ( 49) 00:12:19.946 13166.778 - 13226.356: 94.7525% ( 40) 00:12:19.946 13226.356 - 13285.935: 95.0957% ( 38) 00:12:19.946 13285.935 - 13345.513: 95.2944% ( 22) 00:12:19.946 13345.513 - 13405.091: 95.4931% ( 22) 00:12:19.946 13405.091 - 13464.669: 95.8725% ( 42) 00:12:19.946 13464.669 - 13524.247: 96.0531% ( 20) 00:12:19.946 13524.247 - 13583.825: 96.2157% ( 18) 00:12:19.946 13583.825 - 13643.404: 96.4595% ( 27) 00:12:19.946 13643.404 - 13702.982: 96.7486% ( 32) 00:12:19.946 13702.982 - 13762.560: 96.9292% ( 20) 00:12:19.946 13762.560 - 13822.138: 97.0195% ( 10) 00:12:19.946 13822.138 - 13881.716: 97.1098% ( 10) 
00:12:19.946 13881.716 - 13941.295: 97.1911% ( 9) 00:12:19.946 13941.295 - 14000.873: 97.4079% ( 24) 00:12:19.946 14000.873 - 14060.451: 97.5614% ( 17) 00:12:19.946 14060.451 - 14120.029: 97.6517% ( 10) 00:12:19.946 14120.029 - 14179.607: 97.7782% ( 14) 00:12:19.946 14179.607 - 14239.185: 97.8866% ( 12) 00:12:19.946 14239.185 - 14298.764: 97.9678% ( 9) 00:12:19.946 14298.764 - 14358.342: 98.0401% ( 8) 00:12:19.946 14358.342 - 14417.920: 98.1665% ( 14) 00:12:19.946 14417.920 - 14477.498: 98.2388% ( 8) 00:12:19.946 14477.498 - 14537.076: 98.2930% ( 6) 00:12:19.946 14537.076 - 14596.655: 98.3382% ( 5) 00:12:19.946 14596.655 - 14656.233: 98.3833% ( 5) 00:12:19.946 14656.233 - 14715.811: 98.4194% ( 4) 00:12:19.946 14715.811 - 14775.389: 98.4736% ( 6) 00:12:19.946 14775.389 - 14834.967: 98.5278% ( 6) 00:12:19.946 14834.967 - 14894.545: 98.6272% ( 11) 00:12:19.946 14894.545 - 14954.124: 98.7085% ( 9) 00:12:19.946 14954.124 - 15013.702: 98.7897% ( 9) 00:12:19.946 15013.702 - 15073.280: 98.8168% ( 3) 00:12:19.946 15073.280 - 15132.858: 98.8259% ( 1) 00:12:19.946 15132.858 - 15192.436: 98.8439% ( 2) 00:12:19.946 15192.436 - 15252.015: 98.8710% ( 3) 00:12:19.946 15252.015 - 15371.171: 98.8981% ( 3) 00:12:19.946 15371.171 - 15490.327: 98.9252% ( 3) 00:12:19.946 15490.327 - 15609.484: 98.9523% ( 3) 00:12:19.946 15609.484 - 15728.640: 98.9704% ( 2) 00:12:19.946 15728.640 - 15847.796: 98.9975% ( 3) 00:12:19.946 15847.796 - 15966.953: 99.0246% ( 3) 00:12:19.946 15966.953 - 16086.109: 99.0517% ( 3) 00:12:19.946 16086.109 - 16205.265: 99.0788% ( 3) 00:12:19.946 16205.265 - 16324.422: 99.1059% ( 3) 00:12:19.946 16324.422 - 16443.578: 99.1329% ( 3) 00:12:19.946 16443.578 - 16562.735: 99.1600% ( 3) 00:12:19.946 16562.735 - 16681.891: 99.1871% ( 3) 00:12:19.946 16681.891 - 16801.047: 99.2142% ( 3) 00:12:19.946 16801.047 - 16920.204: 99.2413% ( 3) 00:12:19.946 16920.204 - 17039.360: 99.2684% ( 3) 00:12:19.946 17039.360 - 17158.516: 99.2955% ( 3) 00:12:19.946 17158.516 - 17277.673: 99.3226% ( 3) 00:12:19.946 17277.673 - 17396.829: 99.3497% ( 3) 00:12:19.946 17396.829 - 17515.985: 99.3768% ( 3) 00:12:19.946 17515.985 - 17635.142: 99.3949% ( 2) 00:12:19.946 17635.142 - 17754.298: 99.4220% ( 3) 00:12:19.946 22758.865 - 22878.022: 99.4310% ( 1) 00:12:19.946 22878.022 - 22997.178: 99.4581% ( 3) 00:12:19.946 22997.178 - 23116.335: 99.4852% ( 3) 00:12:19.946 23116.335 - 23235.491: 99.5123% ( 3) 00:12:19.946 23235.491 - 23354.647: 99.5394% ( 3) 00:12:19.946 23354.647 - 23473.804: 99.5665% ( 3) 00:12:19.946 23473.804 - 23592.960: 99.5845% ( 2) 00:12:19.946 23592.960 - 23712.116: 99.6207% ( 4) 00:12:19.946 23712.116 - 23831.273: 99.6478% ( 3) 00:12:19.946 23831.273 - 23950.429: 99.6749% ( 3) 00:12:19.946 23950.429 - 24069.585: 99.7020% ( 3) 00:12:19.946 24069.585 - 24188.742: 99.7290% ( 3) 00:12:19.946 24188.742 - 24307.898: 99.7561% ( 3) 00:12:19.946 24307.898 - 24427.055: 99.7832% ( 3) 00:12:19.946 24427.055 - 24546.211: 99.8013% ( 2) 00:12:19.946 24546.211 - 24665.367: 99.8284% ( 3) 00:12:19.946 24665.367 - 24784.524: 99.8555% ( 3) 00:12:19.946 24784.524 - 24903.680: 99.8826% ( 3) 00:12:19.946 24903.680 - 25022.836: 99.9097% ( 3) 00:12:19.946 25022.836 - 25141.993: 99.9368% ( 3) 00:12:19.946 25141.993 - 25261.149: 99.9639% ( 3) 00:12:19.946 25261.149 - 25380.305: 99.9910% ( 3) 00:12:19.946 25380.305 - 25499.462: 100.0000% ( 1) 00:12:19.946 00:12:19.946 ************************************ 00:12:19.946 END TEST nvme_perf 00:12:19.946 ************************************ 00:12:19.946 07:48:42 nvme.nvme_perf -- 
nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:12:19.946 00:12:19.946 real 0m2.856s 00:12:19.946 user 0m2.347s 00:12:19.946 sys 0m0.385s 00:12:19.946 07:48:42 nvme.nvme_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:19.946 07:48:42 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:12:19.946 07:48:42 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:12:19.946 07:48:42 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:19.946 07:48:42 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:19.946 07:48:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:19.946 ************************************ 00:12:19.946 START TEST nvme_hello_world 00:12:19.946 ************************************ 00:12:19.946 07:48:42 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:12:20.221 Initializing NVMe Controllers 00:12:20.221 Attached to 0000:00:10.0 00:12:20.221 Namespace ID: 1 size: 6GB 00:12:20.221 Attached to 0000:00:11.0 00:12:20.221 Namespace ID: 1 size: 5GB 00:12:20.221 Attached to 0000:00:13.0 00:12:20.221 Namespace ID: 1 size: 1GB 00:12:20.221 Attached to 0000:00:12.0 00:12:20.221 Namespace ID: 1 size: 4GB 00:12:20.221 Namespace ID: 2 size: 4GB 00:12:20.221 Namespace ID: 3 size: 4GB 00:12:20.221 Initialization complete. 00:12:20.221 INFO: using host memory buffer for IO 00:12:20.221 Hello world! 00:12:20.221 INFO: using host memory buffer for IO 00:12:20.221 Hello world! 00:12:20.221 INFO: using host memory buffer for IO 00:12:20.221 Hello world! 00:12:20.221 INFO: using host memory buffer for IO 00:12:20.221 Hello world! 00:12:20.221 INFO: using host memory buffer for IO 00:12:20.221 Hello world! 00:12:20.221 INFO: using host memory buffer for IO 00:12:20.221 Hello world! 
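The hello_world example just run attaches to each controller on the PCI addresses listed, reports every active namespace and its size, then writes a "Hello world!" string through a host memory buffer and reads it back. A condensed sketch of its probe/attach skeleton using the public SPDK API (abbreviated; the write/read-back half is omitted):

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
         struct spdk_nvme_ctrlr_opts *opts)
{
    return true;    /* attach to every controller the probe finds */
}

static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
          struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
    /* walk the active namespaces, as in the "Namespace ID: n size: xGB" lines above */
    for (uint32_t nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
         nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
        struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
        printf("Namespace ID: %u size: %lluGB\n", nsid,
               (unsigned long long)(spdk_nvme_ns_get_size(ns) / 1000000000));
    }
}

int
main(void)
{
    struct spdk_env_opts opts;

    spdk_env_opts_init(&opts);
    if (spdk_env_init(&opts) < 0) {
        return 1;
    }
    /* trid == NULL enumerates local PCIe controllers */
    return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
}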
00:12:20.221 ************************************ 00:12:20.221 END TEST nvme_hello_world 00:12:20.221 ************************************ 00:12:20.221 00:12:20.221 real 0m0.417s 00:12:20.221 user 0m0.184s 00:12:20.221 sys 0m0.178s 00:12:20.221 07:48:42 nvme.nvme_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:20.221 07:48:42 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:12:20.502 07:48:42 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:12:20.502 07:48:42 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:20.502 07:48:42 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:20.502 07:48:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:20.502 ************************************ 00:12:20.502 START TEST nvme_sgl 00:12:20.502 ************************************ 00:12:20.502 07:48:42 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:12:20.762 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:12:20.762 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:12:20.762 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:12:20.762 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:12:20.762 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:12:20.762 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:12:20.762 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:12:20.762 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:12:20.762 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:12:20.762 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:12:20.762 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:12:20.762 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:12:20.762 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:12:20.762 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:12:20.762 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:12:20.762 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:12:20.762 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:12:20.762 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:12:20.762 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:12:20.762 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:12:20.762 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:12:20.762 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:12:20.762 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:12:20.762 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:12:20.762 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:12:20.762 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:12:20.762 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:12:20.762 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:12:20.762 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:12:20.762 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:12:20.762 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:12:20.762 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:12:20.762 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:12:20.762 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:12:20.762 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:12:20.762 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:12:20.762 NVMe Readv/Writev Request test 00:12:20.762 Attached to 0000:00:10.0 00:12:20.762 Attached to 0000:00:11.0 00:12:20.762 Attached to 0000:00:13.0 00:12:20.762 Attached to 0000:00:12.0 00:12:20.762 0000:00:10.0: build_io_request_2 test passed 00:12:20.762 0000:00:10.0: build_io_request_4 test passed 00:12:20.762 0000:00:10.0: build_io_request_5 test passed 00:12:20.763 0000:00:10.0: build_io_request_6 test passed 00:12:20.763 0000:00:10.0: build_io_request_7 test passed 00:12:20.763 0000:00:10.0: build_io_request_10 test passed 00:12:20.763 0000:00:11.0: build_io_request_2 test passed 00:12:20.763 0000:00:11.0: build_io_request_4 test passed 00:12:20.763 0000:00:11.0: build_io_request_5 test passed 00:12:20.763 0000:00:11.0: build_io_request_6 test passed 00:12:20.763 0000:00:11.0: build_io_request_7 test passed 00:12:20.763 0000:00:11.0: build_io_request_10 test passed 00:12:20.763 Cleaning up... 00:12:20.763 ************************************ 00:12:20.763 END TEST nvme_sgl 00:12:20.763 ************************************ 00:12:20.763 00:12:20.763 real 0m0.451s 00:12:20.763 user 0m0.233s 00:12:20.763 sys 0m0.169s 00:12:20.763 07:48:43 nvme.nvme_sgl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:20.763 07:48:43 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:12:20.763 07:48:43 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:12:20.763 07:48:43 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:20.763 07:48:43 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:20.763 07:48:43 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:20.763 ************************************ 00:12:20.763 START TEST nvme_e2edp 00:12:20.763 ************************************ 00:12:20.763 07:48:43 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:12:21.329 NVMe Write/Read with End-to-End data protection test 00:12:21.329 Attached to 0000:00:10.0 00:12:21.329 Attached to 0000:00:11.0 00:12:21.329 Attached to 0000:00:13.0 00:12:21.329 Attached to 0000:00:12.0 00:12:21.329 Cleaning up... 
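The sgl test that passed above builds vectored requests whose scatter-gather elements are handed to the driver through a pair of callbacks; requests with invalid total lengths are expected to be rejected (the "Invalid IO length parameter" lines), while well-formed ones complete. A hedged sketch of issuing such a vectored write with spdk_nvme_ns_cmd_writev (struct sgl_ctx and the two callbacks are illustrative helpers, not the test's own code):

#include <sys/uio.h>
#include "spdk/nvme.h"

struct sgl_ctx {
    struct iovec *iov;      /* caller-provided scatter list */
    int           iovcnt;
    int           idx;
    uint32_t      offset;
};

static void
reset_sgl(void *ref, uint32_t sgl_offset)
{
    struct sgl_ctx *ctx = ref;
    ctx->idx = 0;
    ctx->offset = sgl_offset;   /* the driver may restart mid-payload */
}

static int
next_sge(void *ref, void **address, uint32_t *length)
{
    struct sgl_ctx *ctx = ref;
    struct iovec *iov = &ctx->iov[ctx->idx++];

    *address = (char *)iov->iov_base + ctx->offset;
    *length  = iov->iov_len - ctx->offset;
    ctx->offset = 0;
    return 0;
}

/* hand the vectored payload to the driver; length validation happens inside */
static int
submit_vectored_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                      struct sgl_ctx *ctx, uint64_t lba, uint32_t lba_count,
                      spdk_nvme_cmd_cb cb_fn)
{
    return spdk_nvme_ns_cmd_writev(ns, qpair, lba, lba_count,
                                   cb_fn, ctx, 0, reset_sgl, next_sge);
}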
00:12:21.329 ************************************ 00:12:21.329 END TEST nvme_e2edp 00:12:21.329 ************************************ 00:12:21.329 00:12:21.329 real 0m0.335s 00:12:21.329 user 0m0.130s 00:12:21.329 sys 0m0.161s 00:12:21.329 07:48:43 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:21.329 07:48:43 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:12:21.329 07:48:43 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:12:21.329 07:48:43 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:21.329 07:48:43 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:21.329 07:48:43 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:21.329 ************************************ 00:12:21.329 START TEST nvme_reserve 00:12:21.329 ************************************ 00:12:21.329 07:48:43 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:12:21.588 ===================================================== 00:12:21.588 NVMe Controller at PCI bus 0, device 16, function 0 00:12:21.588 ===================================================== 00:12:21.588 Reservations: Not Supported 00:12:21.588 ===================================================== 00:12:21.588 NVMe Controller at PCI bus 0, device 17, function 0 00:12:21.588 ===================================================== 00:12:21.588 Reservations: Not Supported 00:12:21.588 ===================================================== 00:12:21.588 NVMe Controller at PCI bus 0, device 19, function 0 00:12:21.588 ===================================================== 00:12:21.588 Reservations: Not Supported 00:12:21.588 ===================================================== 00:12:21.588 NVMe Controller at PCI bus 0, device 18, function 0 00:12:21.588 ===================================================== 00:12:21.588 Reservations: Not Supported 00:12:21.588 Reservation test passed 00:12:21.588 ************************************ 00:12:21.588 END TEST nvme_reserve 00:12:21.588 ************************************ 00:12:21.588 00:12:21.588 real 0m0.334s 00:12:21.588 user 0m0.122s 00:12:21.588 sys 0m0.167s 00:12:21.588 07:48:44 nvme.nvme_reserve -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:21.588 07:48:44 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:12:21.588 07:48:44 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:12:21.588 07:48:44 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:21.588 07:48:44 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:21.588 07:48:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:21.588 ************************************ 00:12:21.588 START TEST nvme_err_injection 00:12:21.588 ************************************ 00:12:21.588 07:48:44 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:12:22.155 NVMe Error Injection test 00:12:22.155 Attached to 0000:00:10.0 00:12:22.155 Attached to 0000:00:11.0 00:12:22.155 Attached to 0000:00:13.0 00:12:22.155 Attached to 0000:00:12.0 00:12:22.155 0000:00:10.0: get features failed as expected 00:12:22.155 0000:00:11.0: get features failed as expected 00:12:22.155 0000:00:13.0: get features failed as expected 00:12:22.155 0000:00:12.0: get features failed as expected 00:12:22.155 
0000:00:13.0: get features successfully as expected 00:12:22.155 0000:00:12.0: get features successfully as expected 00:12:22.155 0000:00:10.0: get features successfully as expected 00:12:22.155 0000:00:11.0: get features successfully as expected 00:12:22.155 0000:00:10.0: read failed as expected 00:12:22.155 0000:00:11.0: read failed as expected 00:12:22.155 0000:00:13.0: read failed as expected 00:12:22.155 0000:00:12.0: read failed as expected 00:12:22.155 0000:00:12.0: read successfully as expected 00:12:22.155 0000:00:10.0: read successfully as expected 00:12:22.155 0000:00:11.0: read successfully as expected 00:12:22.155 0000:00:13.0: read successfully as expected 00:12:22.155 Cleaning up... 00:12:22.155 ************************************ 00:12:22.155 END TEST nvme_err_injection 00:12:22.155 ************************************ 00:12:22.155 00:12:22.155 real 0m0.352s 00:12:22.155 user 0m0.144s 00:12:22.155 sys 0m0.161s 00:12:22.155 07:48:44 nvme.nvme_err_injection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:22.155 07:48:44 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:12:22.155 07:48:44 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:12:22.155 07:48:44 nvme -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:12:22.155 07:48:44 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:22.155 07:48:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:22.155 ************************************ 00:12:22.155 START TEST nvme_overhead 00:12:22.155 ************************************ 00:12:22.155 07:48:44 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:12:23.531 Initializing NVMe Controllers 00:12:23.531 Attached to 0000:00:10.0 00:12:23.531 Attached to 0000:00:11.0 00:12:23.531 Attached to 0000:00:13.0 00:12:23.531 Attached to 0000:00:12.0 00:12:23.531 Initialization complete. Launching workers. 
00:12:23.531 submit (in ns) avg, min, max = 16304.1, 12880.0, 83910.0 00:12:23.531 complete (in ns) avg, min, max = 10654.6, 9185.5, 71921.4 00:12:23.531 00:12:23.531 Submit histogram 00:12:23.531 ================ 00:12:23.531 Range in us Cumulative Count 00:12:23.531 12.858 - 12.916: 0.0104% ( 1) 00:12:23.531 13.847 - 13.905: 0.0209% ( 1) 00:12:23.531 14.196 - 14.255: 0.0313% ( 1) 00:12:23.531 14.255 - 14.313: 0.0627% ( 3) 00:12:23.531 14.313 - 14.371: 0.2507% ( 18) 00:12:23.531 14.371 - 14.429: 1.0446% ( 76) 00:12:23.531 14.429 - 14.487: 2.3608% ( 126) 00:12:23.531 14.487 - 14.545: 4.7738% ( 231) 00:12:23.531 14.545 - 14.604: 8.3568% ( 343) 00:12:23.531 14.604 - 14.662: 12.7024% ( 416) 00:12:23.531 14.662 - 14.720: 18.1030% ( 517) 00:12:23.531 14.720 - 14.778: 25.3526% ( 694) 00:12:23.531 14.778 - 14.836: 32.0067% ( 637) 00:12:23.531 14.836 - 14.895: 38.3474% ( 607) 00:12:23.531 14.895 - 15.011: 49.3262% ( 1051) 00:12:23.531 15.011 - 15.127: 56.0117% ( 640) 00:12:23.531 15.127 - 15.244: 61.0989% ( 487) 00:12:23.531 15.244 - 15.360: 64.5983% ( 335) 00:12:23.531 15.360 - 15.476: 66.7816% ( 209) 00:12:23.531 15.476 - 15.593: 68.3485% ( 150) 00:12:23.531 15.593 - 15.709: 69.4244% ( 103) 00:12:23.531 15.709 - 15.825: 70.3332% ( 87) 00:12:23.531 15.825 - 15.942: 70.9495% ( 59) 00:12:23.531 15.942 - 16.058: 71.4718% ( 50) 00:12:23.531 16.058 - 16.175: 71.9733% ( 48) 00:12:23.531 16.175 - 16.291: 72.4120% ( 42) 00:12:23.531 16.291 - 16.407: 72.6627% ( 24) 00:12:23.531 16.407 - 16.524: 72.9343% ( 26) 00:12:23.531 16.524 - 16.640: 73.1119% ( 17) 00:12:23.531 16.640 - 16.756: 73.2372% ( 12) 00:12:23.531 16.756 - 16.873: 73.3626% ( 12) 00:12:23.531 16.873 - 16.989: 73.4566% ( 9) 00:12:23.531 16.989 - 17.105: 73.5715% ( 11) 00:12:23.531 17.105 - 17.222: 73.6551% ( 8) 00:12:23.531 17.222 - 17.338: 73.7177% ( 6) 00:12:23.531 17.338 - 17.455: 73.7595% ( 4) 00:12:23.531 17.455 - 17.571: 73.8535% ( 9) 00:12:23.531 17.571 - 17.687: 75.0235% ( 112) 00:12:23.531 17.687 - 17.804: 79.0348% ( 384) 00:12:23.531 17.804 - 17.920: 83.2550% ( 404) 00:12:23.531 17.920 - 18.036: 85.8874% ( 252) 00:12:23.531 18.036 - 18.153: 87.4752% ( 152) 00:12:23.531 18.153 - 18.269: 88.7601% ( 123) 00:12:23.531 18.269 - 18.385: 89.4286% ( 64) 00:12:23.531 18.385 - 18.502: 89.8778% ( 43) 00:12:23.531 18.502 - 18.618: 90.2852% ( 39) 00:12:23.531 18.618 - 18.735: 90.6403% ( 34) 00:12:23.531 18.735 - 18.851: 90.7970% ( 15) 00:12:23.531 18.851 - 18.967: 90.9851% ( 18) 00:12:23.531 18.967 - 19.084: 91.1418% ( 15) 00:12:23.531 19.084 - 19.200: 91.3820% ( 23) 00:12:23.531 19.200 - 19.316: 91.6118% ( 22) 00:12:23.531 19.316 - 19.433: 91.7999% ( 18) 00:12:23.531 19.433 - 19.549: 91.9461% ( 14) 00:12:23.531 19.549 - 19.665: 92.0715% ( 12) 00:12:23.531 19.665 - 19.782: 92.1341% ( 6) 00:12:23.531 19.782 - 19.898: 92.2281% ( 9) 00:12:23.531 19.898 - 20.015: 92.3222% ( 9) 00:12:23.531 20.015 - 20.131: 92.3848% ( 6) 00:12:23.531 20.131 - 20.247: 92.4057% ( 2) 00:12:23.531 20.247 - 20.364: 92.4684% ( 6) 00:12:23.531 20.364 - 20.480: 92.5520% ( 8) 00:12:23.531 20.480 - 20.596: 92.6251% ( 7) 00:12:23.531 20.596 - 20.713: 92.7400% ( 11) 00:12:23.531 20.713 - 20.829: 92.8236% ( 8) 00:12:23.531 20.829 - 20.945: 92.9385% ( 11) 00:12:23.531 20.945 - 21.062: 93.0638% ( 12) 00:12:23.531 21.062 - 21.178: 93.1996% ( 13) 00:12:23.531 21.178 - 21.295: 93.2936% ( 9) 00:12:23.531 21.295 - 21.411: 93.4085% ( 11) 00:12:23.531 21.411 - 21.527: 93.5235% ( 11) 00:12:23.531 21.527 - 21.644: 93.6906% ( 16) 00:12:23.531 21.644 - 21.760: 93.7637% ( 7) 00:12:23.531 21.760 
- 21.876: 93.8891% ( 12) 00:12:23.531 21.876 - 21.993: 94.0458% ( 15) 00:12:23.531 21.993 - 22.109: 94.1816% ( 13) 00:12:23.531 22.109 - 22.225: 94.3487% ( 16) 00:12:23.531 22.225 - 22.342: 94.4531% ( 10) 00:12:23.531 22.342 - 22.458: 94.6934% ( 23) 00:12:23.531 22.458 - 22.575: 94.7665% ( 7) 00:12:23.531 22.575 - 22.691: 94.8814% ( 11) 00:12:23.531 22.691 - 22.807: 95.0590% ( 17) 00:12:23.531 22.807 - 22.924: 95.1635% ( 10) 00:12:23.531 22.924 - 23.040: 95.2993% ( 13) 00:12:23.531 23.040 - 23.156: 95.4351% ( 13) 00:12:23.531 23.156 - 23.273: 95.5395% ( 10) 00:12:23.531 23.273 - 23.389: 95.6649% ( 12) 00:12:23.531 23.389 - 23.505: 95.7589% ( 9) 00:12:23.531 23.505 - 23.622: 95.9051% ( 14) 00:12:23.531 23.622 - 23.738: 96.0201% ( 11) 00:12:23.531 23.738 - 23.855: 96.1559% ( 13) 00:12:23.531 23.855 - 23.971: 96.2290% ( 7) 00:12:23.531 23.971 - 24.087: 96.2917% ( 6) 00:12:23.531 24.087 - 24.204: 96.3543% ( 6) 00:12:23.532 24.204 - 24.320: 96.4483% ( 9) 00:12:23.532 24.320 - 24.436: 96.5528% ( 10) 00:12:23.532 24.436 - 24.553: 96.6886% ( 13) 00:12:23.532 24.553 - 24.669: 96.7722% ( 8) 00:12:23.532 24.669 - 24.785: 96.8766% ( 10) 00:12:23.532 24.785 - 24.902: 96.9393% ( 6) 00:12:23.532 24.902 - 25.018: 97.0438% ( 10) 00:12:23.532 25.018 - 25.135: 97.2005% ( 15) 00:12:23.532 25.135 - 25.251: 97.3258% ( 12) 00:12:23.532 25.251 - 25.367: 97.4825% ( 15) 00:12:23.532 25.367 - 25.484: 97.6287% ( 14) 00:12:23.532 25.484 - 25.600: 97.7854% ( 15) 00:12:23.532 25.600 - 25.716: 97.8481% ( 6) 00:12:23.532 25.716 - 25.833: 97.9735% ( 12) 00:12:23.532 25.833 - 25.949: 98.0466% ( 7) 00:12:23.532 25.949 - 26.065: 98.1824% ( 13) 00:12:23.532 26.065 - 26.182: 98.2868% ( 10) 00:12:23.532 26.182 - 26.298: 98.4018% ( 11) 00:12:23.532 26.298 - 26.415: 98.4958% ( 9) 00:12:23.532 26.415 - 26.531: 98.5898% ( 9) 00:12:23.532 26.531 - 26.647: 98.6525% ( 6) 00:12:23.532 26.647 - 26.764: 98.6942% ( 4) 00:12:23.532 26.764 - 26.880: 98.7569% ( 6) 00:12:23.532 26.880 - 26.996: 98.8509% ( 9) 00:12:23.532 26.996 - 27.113: 98.9136% ( 6) 00:12:23.532 27.113 - 27.229: 98.9345% ( 2) 00:12:23.532 27.229 - 27.345: 98.9763% ( 4) 00:12:23.532 27.345 - 27.462: 98.9867% ( 1) 00:12:23.532 27.462 - 27.578: 99.0599% ( 7) 00:12:23.532 27.578 - 27.695: 99.1121% ( 5) 00:12:23.532 27.695 - 27.811: 99.1225% ( 1) 00:12:23.532 27.811 - 27.927: 99.1643% ( 4) 00:12:23.532 27.927 - 28.044: 99.1852% ( 2) 00:12:23.532 28.044 - 28.160: 99.2165% ( 3) 00:12:23.532 28.276 - 28.393: 99.2270% ( 1) 00:12:23.532 28.393 - 28.509: 99.2374% ( 1) 00:12:23.532 28.509 - 28.625: 99.2479% ( 1) 00:12:23.532 28.625 - 28.742: 99.2897% ( 4) 00:12:23.532 28.742 - 28.858: 99.3001% ( 1) 00:12:23.532 28.975 - 29.091: 99.3210% ( 2) 00:12:23.532 29.091 - 29.207: 99.3419% ( 2) 00:12:23.532 29.207 - 29.324: 99.3628% ( 2) 00:12:23.532 29.440 - 29.556: 99.3732% ( 1) 00:12:23.532 29.556 - 29.673: 99.3837% ( 1) 00:12:23.532 29.673 - 29.789: 99.4046% ( 2) 00:12:23.532 29.789 - 30.022: 99.4150% ( 1) 00:12:23.532 30.022 - 30.255: 99.4255% ( 1) 00:12:23.532 30.255 - 30.487: 99.4359% ( 1) 00:12:23.532 30.487 - 30.720: 99.4464% ( 1) 00:12:23.532 30.720 - 30.953: 99.4673% ( 2) 00:12:23.532 30.953 - 31.185: 99.4986% ( 3) 00:12:23.532 31.185 - 31.418: 99.5299% ( 3) 00:12:23.532 31.418 - 31.651: 99.5613% ( 3) 00:12:23.532 31.651 - 31.884: 99.6031% ( 4) 00:12:23.532 31.884 - 32.116: 99.6135% ( 1) 00:12:23.532 32.116 - 32.349: 99.6239% ( 1) 00:12:23.532 32.349 - 32.582: 99.6344% ( 1) 00:12:23.532 32.582 - 32.815: 99.6448% ( 1) 00:12:23.532 32.815 - 33.047: 99.6553% ( 1) 00:12:23.532 33.280 - 
33.513: 99.6762% ( 2) 00:12:23.532 33.513 - 33.745: 99.6866% ( 1) 00:12:23.532 34.444 - 34.676: 99.6971% ( 1) 00:12:23.532 34.909 - 35.142: 99.7180% ( 2) 00:12:23.532 35.607 - 35.840: 99.7388% ( 2) 00:12:23.532 35.840 - 36.073: 99.7702% ( 3) 00:12:23.532 36.073 - 36.305: 99.7911% ( 2) 00:12:23.532 36.538 - 36.771: 99.8015% ( 1) 00:12:23.532 37.004 - 37.236: 99.8120% ( 1) 00:12:23.532 37.236 - 37.469: 99.8224% ( 1) 00:12:23.532 37.469 - 37.702: 99.8329% ( 1) 00:12:23.532 38.167 - 38.400: 99.8642% ( 3) 00:12:23.532 38.865 - 39.098: 99.8746% ( 1) 00:12:23.532 39.331 - 39.564: 99.8851% ( 1) 00:12:23.532 40.262 - 40.495: 99.8955% ( 1) 00:12:23.532 40.495 - 40.727: 99.9060% ( 1) 00:12:23.532 40.727 - 40.960: 99.9164% ( 1) 00:12:23.532 42.124 - 42.356: 99.9269% ( 1) 00:12:23.532 42.589 - 42.822: 99.9373% ( 1) 00:12:23.532 43.287 - 43.520: 99.9478% ( 1) 00:12:23.532 43.753 - 43.985: 99.9582% ( 1) 00:12:23.532 46.778 - 47.011: 99.9687% ( 1) 00:12:23.532 47.476 - 47.709: 99.9791% ( 1) 00:12:23.532 48.175 - 48.407: 99.9896% ( 1) 00:12:23.532 83.782 - 84.247: 100.0000% ( 1) 00:12:23.532 00:12:23.532 Complete histogram 00:12:23.532 ================== 00:12:23.532 Range in us Cumulative Count 00:12:23.532 9.135 - 9.193: 0.0313% ( 3) 00:12:23.532 9.193 - 9.251: 0.1254% ( 9) 00:12:23.532 9.251 - 9.309: 1.0133% ( 85) 00:12:23.532 9.309 - 9.367: 4.4187% ( 326) 00:12:23.532 9.367 - 9.425: 11.9712% ( 723) 00:12:23.532 9.425 - 9.484: 22.9291% ( 1049) 00:12:23.532 9.484 - 9.542: 34.7435% ( 1131) 00:12:23.532 9.542 - 9.600: 44.4897% ( 933) 00:12:23.532 9.600 - 9.658: 52.3556% ( 753) 00:12:23.532 9.658 - 9.716: 57.7353% ( 515) 00:12:23.532 9.716 - 9.775: 60.9527% ( 308) 00:12:23.532 9.775 - 9.833: 63.0106% ( 197) 00:12:23.532 9.833 - 9.891: 64.2745% ( 121) 00:12:23.532 9.891 - 9.949: 64.9326% ( 63) 00:12:23.532 9.949 - 10.007: 65.4863% ( 53) 00:12:23.532 10.007 - 10.065: 65.8623% ( 36) 00:12:23.532 10.065 - 10.124: 66.3637% ( 48) 00:12:23.532 10.124 - 10.182: 67.1054% ( 71) 00:12:23.532 10.182 - 10.240: 67.9411% ( 80) 00:12:23.532 10.240 - 10.298: 68.7872% ( 81) 00:12:23.532 10.298 - 10.356: 69.6647% ( 84) 00:12:23.532 10.356 - 10.415: 70.4795% ( 78) 00:12:23.532 10.415 - 10.473: 71.1062% ( 60) 00:12:23.532 10.473 - 10.531: 71.7539% ( 62) 00:12:23.532 10.531 - 10.589: 72.2031% ( 43) 00:12:23.532 10.589 - 10.647: 72.5478% ( 33) 00:12:23.532 10.647 - 10.705: 72.7776% ( 22) 00:12:23.532 10.705 - 10.764: 72.8925% ( 11) 00:12:23.532 10.764 - 10.822: 73.0910% ( 19) 00:12:23.532 10.822 - 10.880: 73.1641% ( 7) 00:12:23.532 10.880 - 10.938: 73.2581% ( 9) 00:12:23.532 10.938 - 10.996: 73.3104% ( 5) 00:12:23.532 10.996 - 11.055: 73.3521% ( 4) 00:12:23.532 11.055 - 11.113: 73.3835% ( 3) 00:12:23.532 11.113 - 11.171: 73.4670% ( 8) 00:12:23.532 11.171 - 11.229: 73.4984% ( 3) 00:12:23.532 11.229 - 11.287: 73.5402% ( 4) 00:12:23.532 11.287 - 11.345: 73.5924% ( 5) 00:12:23.532 11.345 - 11.404: 73.6028% ( 1) 00:12:23.532 11.404 - 11.462: 73.6342% ( 3) 00:12:23.532 11.462 - 11.520: 73.6864% ( 5) 00:12:23.532 11.520 - 11.578: 73.7177% ( 3) 00:12:23.532 11.578 - 11.636: 73.7909% ( 7) 00:12:23.532 11.636 - 11.695: 74.2923% ( 48) 00:12:23.532 11.695 - 11.753: 76.1099% ( 174) 00:12:23.532 11.753 - 11.811: 79.5675% ( 331) 00:12:23.532 11.811 - 11.869: 83.3072% ( 358) 00:12:23.532 11.869 - 11.927: 86.0023% ( 258) 00:12:23.532 11.927 - 11.985: 87.7259% ( 165) 00:12:23.532 11.985 - 12.044: 88.7496% ( 98) 00:12:23.532 12.044 - 12.102: 89.2615% ( 49) 00:12:23.532 12.102 - 12.160: 89.6793% ( 40) 00:12:23.532 12.160 - 12.218: 89.9613% ( 27) 
00:12:23.532 12.218 - 12.276: 90.1807% ( 21) 00:12:23.532 12.276 - 12.335: 90.3270% ( 14) 00:12:23.532 12.335 - 12.393: 90.5045% ( 17) 00:12:23.532 12.393 - 12.451: 90.8284% ( 31) 00:12:23.532 12.451 - 12.509: 91.2044% ( 36) 00:12:23.532 12.509 - 12.567: 91.6432% ( 42) 00:12:23.532 12.567 - 12.625: 91.9148% ( 26) 00:12:23.532 12.625 - 12.684: 92.2699% ( 34) 00:12:23.532 12.684 - 12.742: 92.5624% ( 28) 00:12:23.532 12.742 - 12.800: 92.7922% ( 22) 00:12:23.532 12.800 - 12.858: 92.9594% ( 16) 00:12:23.532 12.858 - 12.916: 93.0847% ( 12) 00:12:23.532 12.916 - 12.975: 93.3041% ( 21) 00:12:23.532 12.975 - 13.033: 93.4294% ( 12) 00:12:23.532 13.033 - 13.091: 93.5130% ( 8) 00:12:23.532 13.091 - 13.149: 93.5861% ( 7) 00:12:23.532 13.149 - 13.207: 93.6175% ( 3) 00:12:23.532 13.207 - 13.265: 93.6801% ( 6) 00:12:23.532 13.265 - 13.324: 93.7115% ( 3) 00:12:23.532 13.324 - 13.382: 93.7742% ( 6) 00:12:23.532 13.382 - 13.440: 93.8159% ( 4) 00:12:23.532 13.440 - 13.498: 93.8577% ( 4) 00:12:23.532 13.498 - 13.556: 93.8786% ( 2) 00:12:23.532 13.556 - 13.615: 93.9413% ( 6) 00:12:23.532 13.615 - 13.673: 93.9831% ( 4) 00:12:23.532 13.673 - 13.731: 94.0144% ( 3) 00:12:23.532 13.731 - 13.789: 94.0458% ( 3) 00:12:23.532 13.789 - 13.847: 94.0771% ( 3) 00:12:23.532 13.847 - 13.905: 94.1084% ( 3) 00:12:23.532 13.905 - 13.964: 94.1502% ( 4) 00:12:23.532 13.964 - 14.022: 94.1816% ( 3) 00:12:23.532 14.022 - 14.080: 94.1920% ( 1) 00:12:23.532 14.080 - 14.138: 94.2233% ( 3) 00:12:23.532 14.138 - 14.196: 94.2860% ( 6) 00:12:23.532 14.196 - 14.255: 94.3069% ( 2) 00:12:23.532 14.255 - 14.313: 94.3382% ( 3) 00:12:23.532 14.313 - 14.371: 94.3800% ( 4) 00:12:23.532 14.371 - 14.429: 94.4323% ( 5) 00:12:23.532 14.429 - 14.487: 94.4740% ( 4) 00:12:23.532 14.487 - 14.545: 94.5054% ( 3) 00:12:23.532 14.545 - 14.604: 94.5576% ( 5) 00:12:23.532 14.604 - 14.662: 94.5994% ( 4) 00:12:23.532 14.662 - 14.720: 94.6725% ( 7) 00:12:23.532 14.720 - 14.778: 94.7456% ( 7) 00:12:23.532 14.778 - 14.836: 94.8292% ( 8) 00:12:23.532 14.836 - 14.895: 94.8501% ( 2) 00:12:23.532 14.895 - 15.011: 94.9441% ( 9) 00:12:23.532 15.011 - 15.127: 94.9963% ( 5) 00:12:23.532 15.127 - 15.244: 95.0277% ( 3) 00:12:23.532 15.244 - 15.360: 95.1113% ( 8) 00:12:23.532 15.360 - 15.476: 95.1635% ( 5) 00:12:23.532 15.476 - 15.593: 95.2470% ( 8) 00:12:23.533 15.593 - 15.709: 95.3097% ( 6) 00:12:23.533 15.709 - 15.825: 95.4455% ( 13) 00:12:23.533 15.825 - 15.942: 95.5395% ( 9) 00:12:23.533 15.942 - 16.058: 95.6440% ( 10) 00:12:23.533 16.058 - 16.175: 95.7380% ( 9) 00:12:23.533 16.175 - 16.291: 95.8529% ( 11) 00:12:23.533 16.291 - 16.407: 95.9783% ( 12) 00:12:23.533 16.407 - 16.524: 96.0827% ( 10) 00:12:23.533 16.524 - 16.640: 96.1976% ( 11) 00:12:23.533 16.640 - 16.756: 96.2499% ( 5) 00:12:23.533 16.756 - 16.873: 96.3857% ( 13) 00:12:23.533 16.873 - 16.989: 96.4483% ( 6) 00:12:23.533 16.989 - 17.105: 96.5633% ( 11) 00:12:23.533 17.105 - 17.222: 96.6259% ( 6) 00:12:23.533 17.222 - 17.338: 96.7513% ( 12) 00:12:23.533 17.338 - 17.455: 96.8244% ( 7) 00:12:23.533 17.455 - 17.571: 96.9289% ( 10) 00:12:23.533 17.571 - 17.687: 97.0542% ( 12) 00:12:23.533 17.687 - 17.804: 97.1482% ( 9) 00:12:23.533 17.804 - 17.920: 97.2736% ( 12) 00:12:23.533 17.920 - 18.036: 97.3989% ( 12) 00:12:23.533 18.036 - 18.153: 97.4929% ( 9) 00:12:23.533 18.153 - 18.269: 97.5452% ( 5) 00:12:23.533 18.269 - 18.385: 97.6392% ( 9) 00:12:23.533 18.385 - 18.502: 97.6914% ( 5) 00:12:23.533 18.502 - 18.618: 97.7750% ( 8) 00:12:23.533 18.618 - 18.735: 97.9108% ( 13) 00:12:23.533 18.735 - 18.851: 97.9735% ( 6) 
00:12:23.533 18.851 - 18.967: 98.0257% ( 5) 00:12:23.533 18.967 - 19.084: 98.1406% ( 11) 00:12:23.533 19.084 - 19.200: 98.2242% ( 8) 00:12:23.533 19.200 - 19.316: 98.3286% ( 10) 00:12:23.533 19.316 - 19.433: 98.4226% ( 9) 00:12:23.533 19.433 - 19.549: 98.4749% ( 5) 00:12:23.533 19.549 - 19.665: 98.6107% ( 13) 00:12:23.533 19.665 - 19.782: 98.6420% ( 3) 00:12:23.533 19.782 - 19.898: 98.6942% ( 5) 00:12:23.533 19.898 - 20.015: 98.7778% ( 8) 00:12:23.533 20.015 - 20.131: 98.9032% ( 12) 00:12:23.533 20.131 - 20.247: 98.9658% ( 6) 00:12:23.533 20.247 - 20.364: 99.0181% ( 5) 00:12:23.533 20.364 - 20.480: 99.0494% ( 3) 00:12:23.533 20.480 - 20.596: 99.0912% ( 4) 00:12:23.533 20.596 - 20.713: 99.1225% ( 3) 00:12:23.533 20.713 - 20.829: 99.1852% ( 6) 00:12:23.533 20.829 - 20.945: 99.2061% ( 2) 00:12:23.533 20.945 - 21.062: 99.2270% ( 2) 00:12:23.533 21.062 - 21.178: 99.2479% ( 2) 00:12:23.533 21.178 - 21.295: 99.3001% ( 5) 00:12:23.533 21.295 - 21.411: 99.3315% ( 3) 00:12:23.533 21.411 - 21.527: 99.3837% ( 5) 00:12:23.533 21.527 - 21.644: 99.3941% ( 1) 00:12:23.533 21.644 - 21.760: 99.4046% ( 1) 00:12:23.533 21.760 - 21.876: 99.4359% ( 3) 00:12:23.533 21.876 - 21.993: 99.4673% ( 3) 00:12:23.533 21.993 - 22.109: 99.4881% ( 2) 00:12:23.533 22.109 - 22.225: 99.5195% ( 3) 00:12:23.533 22.225 - 22.342: 99.5299% ( 1) 00:12:23.533 22.342 - 22.458: 99.5508% ( 2) 00:12:23.533 22.575 - 22.691: 99.5717% ( 2) 00:12:23.533 22.691 - 22.807: 99.5822% ( 1) 00:12:23.533 22.807 - 22.924: 99.6031% ( 2) 00:12:23.533 23.040 - 23.156: 99.6344% ( 3) 00:12:23.533 23.156 - 23.273: 99.6448% ( 1) 00:12:23.533 23.738 - 23.855: 99.6657% ( 2) 00:12:23.533 23.855 - 23.971: 99.6762% ( 1) 00:12:23.533 23.971 - 24.087: 99.6866% ( 1) 00:12:23.533 24.436 - 24.553: 99.6971% ( 1) 00:12:23.533 24.553 - 24.669: 99.7075% ( 1) 00:12:23.533 24.669 - 24.785: 99.7180% ( 1) 00:12:23.533 24.785 - 24.902: 99.7284% ( 1) 00:12:23.533 25.484 - 25.600: 99.7388% ( 1) 00:12:23.533 25.833 - 25.949: 99.7493% ( 1) 00:12:23.533 25.949 - 26.065: 99.7597% ( 1) 00:12:23.533 26.065 - 26.182: 99.7702% ( 1) 00:12:23.533 26.298 - 26.415: 99.7806% ( 1) 00:12:23.533 26.764 - 26.880: 99.7911% ( 1) 00:12:23.533 27.811 - 27.927: 99.8015% ( 1) 00:12:23.533 27.927 - 28.044: 99.8224% ( 2) 00:12:23.533 28.276 - 28.393: 99.8329% ( 1) 00:12:23.533 28.625 - 28.742: 99.8433% ( 1) 00:12:23.533 28.858 - 28.975: 99.8538% ( 1) 00:12:23.533 28.975 - 29.091: 99.8642% ( 1) 00:12:23.533 29.789 - 30.022: 99.8746% ( 1) 00:12:23.533 32.349 - 32.582: 99.8851% ( 1) 00:12:23.533 33.047 - 33.280: 99.8955% ( 1) 00:12:23.533 33.978 - 34.211: 99.9060% ( 1) 00:12:23.533 34.211 - 34.444: 99.9164% ( 1) 00:12:23.533 34.909 - 35.142: 99.9269% ( 1) 00:12:23.533 36.538 - 36.771: 99.9373% ( 1) 00:12:23.533 38.633 - 38.865: 99.9478% ( 1) 00:12:23.533 42.124 - 42.356: 99.9582% ( 1) 00:12:23.533 43.055 - 43.287: 99.9687% ( 1) 00:12:23.533 43.287 - 43.520: 99.9791% ( 1) 00:12:23.533 71.215 - 71.680: 99.9896% ( 1) 00:12:23.533 71.680 - 72.145: 100.0000% ( 1) 00:12:23.533 00:12:23.533 ************************************ 00:12:23.533 END TEST nvme_overhead 00:12:23.533 ************************************ 00:12:23.533 00:12:23.533 real 0m1.345s 00:12:23.533 user 0m1.111s 00:12:23.533 sys 0m0.176s 00:12:23.533 07:48:45 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:23.533 07:48:45 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:12:23.533 07:48:45 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 
3 -i 0 00:12:23.533 07:48:45 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:12:23.533 07:48:45 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:23.533 07:48:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:23.533 ************************************ 00:12:23.533 START TEST nvme_arbitration 00:12:23.533 ************************************ 00:12:23.533 07:48:45 nvme.nvme_arbitration -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:12:26.818 Initializing NVMe Controllers 00:12:26.818 Attached to 0000:00:10.0 00:12:26.818 Attached to 0000:00:11.0 00:12:26.818 Attached to 0000:00:13.0 00:12:26.818 Attached to 0000:00:12.0 00:12:26.818 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:12:26.818 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:12:26.818 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:12:26.818 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:12:26.818 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:12:26.818 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:12:26.818 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:12:26.818 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:12:26.818 Initialization complete. Launching workers. 00:12:26.818 Starting thread on core 1 with urgent priority queue 00:12:26.818 Starting thread on core 2 with urgent priority queue 00:12:26.818 Starting thread on core 3 with urgent priority queue 00:12:26.818 Starting thread on core 0 with urgent priority queue 00:12:26.818 QEMU NVMe Ctrl (12340 ) core 0: 640.00 IO/s 156.25 secs/100000 ios 00:12:26.818 QEMU NVMe Ctrl (12342 ) core 0: 640.00 IO/s 156.25 secs/100000 ios 00:12:26.818 QEMU NVMe Ctrl (12341 ) core 1: 704.00 IO/s 142.05 secs/100000 ios 00:12:26.818 QEMU NVMe Ctrl (12342 ) core 1: 704.00 IO/s 142.05 secs/100000 ios 00:12:26.818 QEMU NVMe Ctrl (12343 ) core 2: 469.33 IO/s 213.07 secs/100000 ios 00:12:26.818 QEMU NVMe Ctrl (12342 ) core 3: 661.33 IO/s 151.21 secs/100000 ios 00:12:26.818 ======================================================== 00:12:26.818 00:12:27.076 ************************************ 00:12:27.076 END TEST nvme_arbitration 00:12:27.076 ************************************ 00:12:27.076 00:12:27.076 real 0m3.514s 00:12:27.076 user 0m9.559s 00:12:27.076 sys 0m0.190s 00:12:27.076 07:48:49 nvme.nvme_arbitration -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:27.076 07:48:49 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:12:27.076 07:48:49 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:12:27.076 07:48:49 nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:27.076 07:48:49 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:27.076 07:48:49 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:27.076 ************************************ 00:12:27.076 START TEST nvme_single_aen 00:12:27.076 ************************************ 00:12:27.076 07:48:49 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:12:27.340 Asynchronous Event Request test 00:12:27.340 Attached to 0000:00:10.0 00:12:27.340 Attached to 0000:00:11.0 00:12:27.340 Attached to 0000:00:13.0 00:12:27.340 Attached to 0000:00:12.0 00:12:27.340 Reset controller to setup AER completions for this process 
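Every stage in this log is launched through the same run_test wrapper from test/common/autotest_common.sh; its xtrace is what produces the '[' N -le 1 ']' argument checks, the xtrace_disable calls, the START/END banners, and the real/user/sys timing triplets seen throughout. A minimal sketch of that banner-and-timing pattern (simplified; the real helper also validates its arguments and propagates the test's exit code):

    run_test() {
        local test_name=$1
        shift

        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"

        # The "real/user/sys" lines in this log come from timing the
        # test binary or shell function passed as the remaining args.
        time "$@"

        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }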
00:12:27.340 Registering asynchronous event callbacks... 00:12:27.340 Getting orig temperature thresholds of all controllers 00:12:27.340 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:27.340 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:27.340 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:27.340 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:27.340 Setting all controllers temperature threshold low to trigger AER 00:12:27.340 Waiting for all controllers temperature threshold to be set lower 00:12:27.340 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:27.340 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:12:27.340 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:27.340 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:12:27.340 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:27.340 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:12:27.340 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:27.340 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:12:27.340 Waiting for all controllers to trigger AER and reset threshold 00:12:27.340 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:27.340 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:27.340 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:27.340 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:27.340 Cleaning up... 00:12:27.340 00:12:27.340 real 0m0.358s 00:12:27.340 user 0m0.147s 00:12:27.340 sys 0m0.161s 00:12:27.340 07:48:49 nvme.nvme_single_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:27.340 07:48:49 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:12:27.340 ************************************ 00:12:27.340 END TEST nvme_single_aen 00:12:27.340 ************************************ 00:12:27.340 07:48:49 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:12:27.340 07:48:49 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:27.340 07:48:49 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:27.340 07:48:49 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:27.340 ************************************ 00:12:27.340 START TEST nvme_doorbell_aers 00:12:27.340 ************************************ 00:12:27.340 07:48:49 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # nvme_doorbell_aers 00:12:27.340 07:48:49 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:12:27.340 07:48:49 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:12:27.340 07:48:49 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:12:27.340 07:48:49 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:12:27.340 07:48:49 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1494 -- # bdfs=() 00:12:27.340 07:48:49 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1494 -- # local bdfs 00:12:27.340 07:48:49 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1495 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:27.340 07:48:49 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1495 -- # jq -r '.config[].params.traddr' 00:12:27.340 07:48:49 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1495 -- # 
/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:27.599 07:48:49 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # (( 4 == 0 )) 00:12:27.599 07:48:49 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:27.599 07:48:49 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:27.599 07:48:49 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:12:27.859 [2024-11-06 07:48:50.308789] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64834) is not found. Dropping the request. 00:12:37.834 Executing: test_write_invalid_db 00:12:37.834 Waiting for AER completion... 00:12:37.834 Failure: test_write_invalid_db 00:12:37.834 00:12:37.834 Executing: test_invalid_db_write_overflow_sq 00:12:37.834 Waiting for AER completion... 00:12:37.834 Failure: test_invalid_db_write_overflow_sq 00:12:37.834 00:12:37.834 Executing: test_invalid_db_write_overflow_cq 00:12:37.834 Waiting for AER completion... 00:12:37.834 Failure: test_invalid_db_write_overflow_cq 00:12:37.834 00:12:37.834 07:49:00 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:37.834 07:49:00 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:12:37.834 [2024-11-06 07:49:00.367699] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64834) is not found. Dropping the request. 00:12:47.811 Executing: test_write_invalid_db 00:12:47.811 Waiting for AER completion... 00:12:47.811 Failure: test_write_invalid_db 00:12:47.811 00:12:47.811 Executing: test_invalid_db_write_overflow_sq 00:12:47.811 Waiting for AER completion... 00:12:47.811 Failure: test_invalid_db_write_overflow_sq 00:12:47.811 00:12:47.811 Executing: test_invalid_db_write_overflow_cq 00:12:47.811 Waiting for AER completion... 00:12:47.811 Failure: test_invalid_db_write_overflow_cq 00:12:47.811 00:12:47.811 07:49:10 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:47.811 07:49:10 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:12:47.811 [2024-11-06 07:49:10.414442] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64834) is not found. Dropping the request. 00:12:57.827 Executing: test_write_invalid_db 00:12:57.827 Waiting for AER completion... 00:12:57.827 Failure: test_write_invalid_db 00:12:57.827 00:12:57.827 Executing: test_invalid_db_write_overflow_sq 00:12:57.827 Waiting for AER completion... 00:12:57.827 Failure: test_invalid_db_write_overflow_sq 00:12:57.827 00:12:57.827 Executing: test_invalid_db_write_overflow_cq 00:12:57.827 Waiting for AER completion... 
00:12:57.827 Failure: test_invalid_db_write_overflow_cq 00:12:57.827 00:12:57.827 07:49:20 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:57.827 07:49:20 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:12:57.827 [2024-11-06 07:49:20.422574] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64834) is not found. Dropping the request. 00:13:07.798 Executing: test_write_invalid_db 00:13:07.798 Waiting for AER completion... 00:13:07.798 Failure: test_write_invalid_db 00:13:07.798 00:13:07.798 Executing: test_invalid_db_write_overflow_sq 00:13:07.798 Waiting for AER completion... 00:13:07.798 Failure: test_invalid_db_write_overflow_sq 00:13:07.798 00:13:07.798 Executing: test_invalid_db_write_overflow_cq 00:13:07.798 Waiting for AER completion... 00:13:07.798 Failure: test_invalid_db_write_overflow_cq 00:13:07.798 00:13:07.798 00:13:07.798 real 0m40.277s 00:13:07.798 user 0m34.213s 00:13:07.798 sys 0m5.645s 00:13:07.798 07:49:30 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:07.798 07:49:30 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:13:07.798 ************************************ 00:13:07.798 END TEST nvme_doorbell_aers 00:13:07.798 ************************************ 00:13:07.798 07:49:30 nvme -- nvme/nvme.sh@97 -- # uname 00:13:07.798 07:49:30 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:13:07.798 07:49:30 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:13:07.798 07:49:30 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:13:07.798 07:49:30 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:07.798 07:49:30 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:07.798 ************************************ 00:13:07.798 START TEST nvme_multi_aen 00:13:07.798 ************************************ 00:13:07.798 07:49:30 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:13:08.057 [2024-11-06 07:49:30.547012] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64834) is not found. Dropping the request. 00:13:08.057 [2024-11-06 07:49:30.547147] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64834) is not found. Dropping the request. 00:13:08.057 [2024-11-06 07:49:30.547192] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64834) is not found. Dropping the request. 00:13:08.057 [2024-11-06 07:49:30.549548] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64834) is not found. Dropping the request. 00:13:08.057 [2024-11-06 07:49:30.549613] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64834) is not found. Dropping the request. 00:13:08.057 [2024-11-06 07:49:30.549638] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64834) is not found. Dropping the request. 00:13:08.057 [2024-11-06 07:49:30.551455] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64834) is not found. 
Dropping the request. 00:13:08.057 [2024-11-06 07:49:30.551536] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64834) is not found. Dropping the request. 00:13:08.057 [2024-11-06 07:49:30.551560] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64834) is not found. Dropping the request. 00:13:08.057 [2024-11-06 07:49:30.553391] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64834) is not found. Dropping the request. 00:13:08.057 [2024-11-06 07:49:30.553451] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64834) is not found. Dropping the request. 00:13:08.057 [2024-11-06 07:49:30.553474] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64834) is not found. Dropping the request. 00:13:08.057 Child process pid: 65356 00:13:08.317 [Child] Asynchronous Event Request test 00:13:08.317 [Child] Attached to 0000:00:10.0 00:13:08.317 [Child] Attached to 0000:00:11.0 00:13:08.317 [Child] Attached to 0000:00:13.0 00:13:08.317 [Child] Attached to 0000:00:12.0 00:13:08.317 [Child] Registering asynchronous event callbacks... 00:13:08.317 [Child] Getting orig temperature thresholds of all controllers 00:13:08.317 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:08.317 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:08.317 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:08.317 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:08.317 [Child] Waiting for all controllers to trigger AER and reset threshold 00:13:08.317 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:08.317 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:08.317 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:08.317 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:08.317 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:08.317 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:08.317 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:08.317 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:08.317 [Child] Cleaning up... 00:13:08.317 Asynchronous Event Request test 00:13:08.317 Attached to 0000:00:10.0 00:13:08.317 Attached to 0000:00:11.0 00:13:08.317 Attached to 0000:00:13.0 00:13:08.317 Attached to 0000:00:12.0 00:13:08.317 Reset controller to setup AER completions for this process 00:13:08.317 Registering asynchronous event callbacks... 
00:13:08.317 Getting orig temperature thresholds of all controllers 00:13:08.317 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:08.317 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:08.317 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:08.317 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:08.317 Setting all controllers temperature threshold low to trigger AER 00:13:08.317 Waiting for all controllers temperature threshold to be set lower 00:13:08.317 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:08.317 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:13:08.317 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:08.317 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:13:08.317 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:08.317 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:13:08.317 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:08.317 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:13:08.317 Waiting for all controllers to trigger AER and reset threshold 00:13:08.317 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:08.317 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:08.317 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:08.317 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:08.317 Cleaning up... 00:13:08.317 00:13:08.317 real 0m0.702s 00:13:08.317 user 0m0.270s 00:13:08.317 sys 0m0.307s 00:13:08.317 07:49:30 nvme.nvme_multi_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:08.317 07:49:30 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:13:08.317 ************************************ 00:13:08.317 END TEST nvme_multi_aen 00:13:08.317 ************************************ 00:13:08.575 07:49:30 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:13:08.575 07:49:30 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:08.575 07:49:30 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:08.575 07:49:30 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:08.575 ************************************ 00:13:08.575 START TEST nvme_startup 00:13:08.575 ************************************ 00:13:08.575 07:49:30 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:13:08.834 Initializing NVMe Controllers 00:13:08.834 Attached to 0000:00:10.0 00:13:08.834 Attached to 0000:00:11.0 00:13:08.834 Attached to 0000:00:13.0 00:13:08.834 Attached to 0000:00:12.0 00:13:08.834 Initialization complete. 00:13:08.834 Time used:222023.266 (us). 
00:13:08.834 00:13:08.834 real 0m0.322s 00:13:08.834 user 0m0.111s 00:13:08.834 sys 0m0.159s 00:13:08.834 07:49:31 nvme.nvme_startup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:08.834 ************************************ 00:13:08.834 END TEST nvme_startup 00:13:08.834 07:49:31 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:13:08.834 ************************************ 00:13:08.834 07:49:31 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:13:08.834 07:49:31 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:08.834 07:49:31 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:08.834 07:49:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:08.834 ************************************ 00:13:08.834 START TEST nvme_multi_secondary 00:13:08.834 ************************************ 00:13:08.834 07:49:31 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # nvme_multi_secondary 00:13:08.834 07:49:31 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65412 00:13:08.834 07:49:31 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:13:08.834 07:49:31 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65413 00:13:08.834 07:49:31 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:13:08.834 07:49:31 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:13:12.119 Initializing NVMe Controllers 00:13:12.119 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:12.119 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:12.119 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:12.119 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:12.119 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:13:12.119 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:13:12.119 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:13:12.119 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:13:12.119 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:13:12.119 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:13:12.119 Initialization complete. Launching workers. 
00:13:12.119 ======================================================== 00:13:12.119 Latency(us) 00:13:12.119 Device Information : IOPS MiB/s Average min max 00:13:12.119 PCIE (0000:00:10.0) NSID 1 from core 2: 2325.74 9.08 6869.50 1180.41 13800.04 00:13:12.119 PCIE (0000:00:11.0) NSID 1 from core 2: 2325.74 9.08 6870.03 1226.94 14315.12 00:13:12.119 PCIE (0000:00:13.0) NSID 1 from core 2: 2325.74 9.08 6869.91 1297.34 16521.71 00:13:12.119 PCIE (0000:00:12.0) NSID 1 from core 2: 2325.74 9.08 6868.99 1370.74 16786.14 00:13:12.119 PCIE (0000:00:12.0) NSID 2 from core 2: 2325.74 9.08 6869.36 1273.30 14606.71 00:13:12.119 PCIE (0000:00:12.0) NSID 3 from core 2: 2325.74 9.08 6869.52 1274.91 13623.00 00:13:12.119 ======================================================== 00:13:12.119 Total : 13954.46 54.51 6869.55 1180.41 16786.14 00:13:12.119 00:13:12.377 07:49:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65412 00:13:12.377 Initializing NVMe Controllers 00:13:12.377 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:12.377 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:12.377 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:12.377 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:12.377 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:13:12.377 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:13:12.377 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:13:12.377 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:13:12.377 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:13:12.377 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:13:12.377 Initialization complete. Launching workers. 00:13:12.377 ======================================================== 00:13:12.377 Latency(us) 00:13:12.377 Device Information : IOPS MiB/s Average min max 00:13:12.377 PCIE (0000:00:10.0) NSID 1 from core 1: 5060.73 19.77 3159.48 1217.05 7284.92 00:13:12.377 PCIE (0000:00:11.0) NSID 1 from core 1: 5060.73 19.77 3160.76 1236.29 7190.50 00:13:12.377 PCIE (0000:00:13.0) NSID 1 from core 1: 5060.73 19.77 3160.59 1234.08 6883.31 00:13:12.377 PCIE (0000:00:12.0) NSID 1 from core 1: 5060.73 19.77 3160.28 1229.24 6966.42 00:13:12.377 PCIE (0000:00:12.0) NSID 2 from core 1: 5060.73 19.77 3159.97 1233.64 7303.25 00:13:12.377 PCIE (0000:00:12.0) NSID 3 from core 1: 5060.73 19.77 3159.63 1239.14 7098.21 00:13:12.377 ======================================================== 00:13:12.377 Total : 30364.35 118.61 3160.12 1217.05 7303.25 00:13:12.377 00:13:14.908 Initializing NVMe Controllers 00:13:14.908 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:14.908 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:14.908 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:14.908 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:14.908 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:13:14.908 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:13:14.908 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:13:14.908 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:13:14.908 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:13:14.908 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:13:14.908 Initialization complete. Launching workers. 
00:13:14.908 ======================================================== 00:13:14.908 Latency(us) 00:13:14.908 Device Information : IOPS MiB/s Average min max 00:13:14.908 PCIE (0000:00:10.0) NSID 1 from core 0: 7963.17 31.11 2007.60 926.49 14126.36 00:13:14.908 PCIE (0000:00:11.0) NSID 1 from core 0: 7963.17 31.11 2008.74 963.38 14301.73 00:13:14.908 PCIE (0000:00:13.0) NSID 1 from core 0: 7963.17 31.11 2008.66 939.37 14571.81 00:13:14.908 PCIE (0000:00:12.0) NSID 1 from core 0: 7963.17 31.11 2008.57 881.62 14727.78 00:13:14.908 PCIE (0000:00:12.0) NSID 2 from core 0: 7963.17 31.11 2008.51 833.78 15216.26 00:13:14.908 PCIE (0000:00:12.0) NSID 3 from core 0: 7963.17 31.11 2008.44 769.92 14947.75 00:13:14.908 ======================================================== 00:13:14.908 Total : 47779.03 186.64 2008.42 769.92 15216.26 00:13:14.908 00:13:14.908 07:49:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65413 00:13:14.908 07:49:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65482 00:13:14.908 07:49:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:13:14.908 07:49:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65483 00:13:14.908 07:49:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:13:14.908 07:49:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:13:18.189 Initializing NVMe Controllers 00:13:18.189 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:18.189 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:18.189 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:18.189 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:18.189 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:13:18.189 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:13:18.189 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:13:18.189 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:13:18.189 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:13:18.189 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:13:18.189 Initialization complete. Launching workers. 
00:13:18.189 ======================================================== 00:13:18.189 Latency(us) 00:13:18.189 Device Information : IOPS MiB/s Average min max 00:13:18.189 PCIE (0000:00:10.0) NSID 1 from core 1: 5600.75 21.88 2854.98 1009.73 9023.99 00:13:18.189 PCIE (0000:00:11.0) NSID 1 from core 1: 5600.75 21.88 2856.39 1044.28 9414.02 00:13:18.189 PCIE (0000:00:13.0) NSID 1 from core 1: 5600.75 21.88 2856.38 1043.90 10129.86 00:13:18.189 PCIE (0000:00:12.0) NSID 1 from core 1: 5600.75 21.88 2856.34 1040.98 10751.96 00:13:18.189 PCIE (0000:00:12.0) NSID 2 from core 1: 5600.75 21.88 2856.33 1032.17 10964.54 00:13:18.189 PCIE (0000:00:12.0) NSID 3 from core 1: 5606.08 21.90 2853.57 1051.47 8180.98 00:13:18.189 ======================================================== 00:13:18.189 Total : 33609.84 131.29 2855.67 1009.73 10964.54 00:13:18.189 00:13:18.189 Initializing NVMe Controllers 00:13:18.189 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:18.189 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:18.189 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:18.189 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:18.189 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:13:18.189 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:13:18.189 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:13:18.189 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:13:18.189 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:13:18.189 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:13:18.190 Initialization complete. Launching workers. 00:13:18.190 ======================================================== 00:13:18.190 Latency(us) 00:13:18.190 Device Information : IOPS MiB/s Average min max 00:13:18.190 PCIE (0000:00:10.0) NSID 1 from core 0: 5605.25 21.90 2852.66 1127.05 10010.46 00:13:18.190 PCIE (0000:00:11.0) NSID 1 from core 0: 5605.25 21.90 2853.87 1179.89 10933.06 00:13:18.190 PCIE (0000:00:13.0) NSID 1 from core 0: 5610.59 21.92 2851.02 1068.95 9645.05 00:13:18.190 PCIE (0000:00:12.0) NSID 1 from core 0: 5605.25 21.90 2853.60 1011.23 9854.61 00:13:18.190 PCIE (0000:00:12.0) NSID 2 from core 0: 5605.25 21.90 2853.48 917.99 9888.66 00:13:18.190 PCIE (0000:00:12.0) NSID 3 from core 0: 5610.59 21.92 2850.64 838.74 9852.42 00:13:18.190 ======================================================== 00:13:18.190 Total : 33642.20 131.41 2852.54 838.74 10933.06 00:13:18.190 00:13:20.098 Initializing NVMe Controllers 00:13:20.098 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:20.098 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:20.098 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:20.098 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:20.098 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:13:20.098 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:13:20.098 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:13:20.098 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:13:20.098 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:13:20.098 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:13:20.098 Initialization complete. Launching workers. 
00:13:20.098 ======================================================== 00:13:20.098 Latency(us) 00:13:20.098 Device Information : IOPS MiB/s Average min max 00:13:20.098 PCIE (0000:00:10.0) NSID 1 from core 2: 3378.68 13.20 4733.67 967.92 15498.55 00:13:20.098 PCIE (0000:00:11.0) NSID 1 from core 2: 3378.68 13.20 4734.56 988.13 15473.63 00:13:20.098 PCIE (0000:00:13.0) NSID 1 from core 2: 3378.68 13.20 4733.90 1043.15 13729.70 00:13:20.098 PCIE (0000:00:12.0) NSID 1 from core 2: 3381.88 13.21 4729.99 1022.07 18464.70 00:13:20.098 PCIE (0000:00:12.0) NSID 2 from core 2: 3381.88 13.21 4730.07 955.04 14419.90 00:13:20.098 PCIE (0000:00:12.0) NSID 3 from core 2: 3381.88 13.21 4729.67 794.07 14253.32 00:13:20.098 ======================================================== 00:13:20.098 Total : 20281.65 79.23 4731.97 794.07 18464.70 00:13:20.098 00:13:20.098 07:49:42 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65482 00:13:20.098 07:49:42 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65483 00:13:20.098 00:13:20.098 real 0m11.007s 00:13:20.098 user 0m18.638s 00:13:20.098 sys 0m1.004s 00:13:20.098 07:49:42 nvme.nvme_multi_secondary -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:20.098 07:49:42 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:13:20.098 ************************************ 00:13:20.098 END TEST nvme_multi_secondary 00:13:20.098 ************************************ 00:13:20.098 07:49:42 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:13:20.098 07:49:42 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:13:20.098 07:49:42 nvme -- common/autotest_common.sh@1089 -- # [[ -e /proc/64408 ]] 00:13:20.098 07:49:42 nvme -- common/autotest_common.sh@1090 -- # kill 64408 00:13:20.098 07:49:42 nvme -- common/autotest_common.sh@1091 -- # wait 64408 00:13:20.098 [2024-11-06 07:49:42.409227] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65349) is not found. Dropping the request. 00:13:20.098 [2024-11-06 07:49:42.410213] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65349) is not found. Dropping the request. 00:13:20.098 [2024-11-06 07:49:42.410480] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65349) is not found. Dropping the request. 00:13:20.098 [2024-11-06 07:49:42.410660] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65349) is not found. Dropping the request. 00:13:20.098 [2024-11-06 07:49:42.413376] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65349) is not found. Dropping the request. 00:13:20.098 [2024-11-06 07:49:42.413537] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65349) is not found. Dropping the request. 00:13:20.098 [2024-11-06 07:49:42.413645] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65349) is not found. Dropping the request. 00:13:20.098 [2024-11-06 07:49:42.413744] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65349) is not found. Dropping the request. 00:13:20.098 [2024-11-06 07:49:42.416321] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65349) is not found. Dropping the request. 
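The nvme_multi_secondary stage that just finished drives three spdk_nvme_perf processes against the same controllers, as the invocations logged above show: -i 0 gives all three the same shared-memory id, so the two short-lived secondaries (core masks 0x2 and 0x4, -t 3) attach to the controllers initialized by the longer-running primary (0x1, -t 5). A condensed sketch of that orchestration, with the flags copied from this log (the real logic lives in nvme/nvme.sh and records the pids exactly as traced above):

    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf

    "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # primary, lcore 0
    pid0=$!
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # secondary, lcore 1
    pid1=$!
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &   # secondary, lcore 2
    pid2=$!

    # The -t 3 secondaries exit first; the -t 5 primary outlives them,
    # which is why the waits above resolve in pid order.
    for pid in "$pid0" "$pid1" "$pid2"; do
        wait "$pid"
    done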
00:13:20.098 [2024-11-06 07:49:42.416503] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65349) is not found. Dropping the request. 00:13:20.098 [2024-11-06 07:49:42.416626] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65349) is not found. Dropping the request. 00:13:20.098 [2024-11-06 07:49:42.416757] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65349) is not found. Dropping the request. 00:13:20.098 [2024-11-06 07:49:42.419352] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65349) is not found. Dropping the request. 00:13:20.098 [2024-11-06 07:49:42.419510] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65349) is not found. Dropping the request. 00:13:20.098 [2024-11-06 07:49:42.419621] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65349) is not found. Dropping the request. 00:13:20.099 [2024-11-06 07:49:42.419741] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65349) is not found. Dropping the request. 00:13:20.099 07:49:42 nvme -- common/autotest_common.sh@1093 -- # rm -f /var/run/spdk_stub0 00:13:20.099 07:49:42 nvme -- common/autotest_common.sh@1097 -- # echo 2 00:13:20.099 07:49:42 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:13:20.099 07:49:42 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:20.099 07:49:42 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:20.099 07:49:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:20.099 ************************************ 00:13:20.099 START TEST bdev_nvme_reset_stuck_adm_cmd 00:13:20.099 ************************************ 00:13:20.099 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:13:20.099 * Looking for test storage... 
00:13:20.099 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:20.099 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:13:20.099 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1689 -- # lcov --version 00:13:20.099 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:13:20.357 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:13:20.357 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:20.357 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:20.357 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:20.357 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:13:20.357 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:13:20.357 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:13:20.357 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:13:20.357 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:13:20.357 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:13:20.357 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:13:20.357 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:20.357 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:13:20.357 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:13:20.357 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:20.357 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:20.357 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:13:20.357 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:13:20.357 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:20.357 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:13:20.357 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:13:20.357 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:13:20.357 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:13:20.357 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:20.357 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:13:20.357 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:13:20.357 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:20.357 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:20.357 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:13:20.357 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:20.357 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:13:20.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.357 --rc genhtml_branch_coverage=1 00:13:20.357 --rc genhtml_function_coverage=1 00:13:20.357 --rc genhtml_legend=1 00:13:20.357 --rc geninfo_all_blocks=1 00:13:20.357 --rc geninfo_unexecuted_blocks=1 00:13:20.357 00:13:20.357 ' 00:13:20.357 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:13:20.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.357 --rc genhtml_branch_coverage=1 00:13:20.357 --rc genhtml_function_coverage=1 00:13:20.357 --rc genhtml_legend=1 00:13:20.357 --rc geninfo_all_blocks=1 00:13:20.357 --rc geninfo_unexecuted_blocks=1 00:13:20.357 00:13:20.357 ' 00:13:20.357 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:13:20.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.357 --rc genhtml_branch_coverage=1 00:13:20.357 --rc genhtml_function_coverage=1 00:13:20.357 --rc genhtml_legend=1 00:13:20.357 --rc geninfo_all_blocks=1 00:13:20.357 --rc geninfo_unexecuted_blocks=1 00:13:20.357 00:13:20.357 ' 00:13:20.357 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:13:20.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.357 --rc genhtml_branch_coverage=1 00:13:20.357 --rc genhtml_function_coverage=1 00:13:20.357 --rc genhtml_legend=1 00:13:20.357 --rc geninfo_all_blocks=1 00:13:20.357 --rc geninfo_unexecuted_blocks=1 00:13:20.357 00:13:20.357 ' 00:13:20.357 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:13:20.357 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:13:20.357 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:13:20.357 
07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:13:20.357 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:13:20.357 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:13:20.357 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1505 -- # bdfs=() 00:13:20.357 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1505 -- # local bdfs 00:13:20.357 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1506 -- # bdfs=($(get_nvme_bdfs)) 00:13:20.357 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1506 -- # get_nvme_bdfs 00:13:20.357 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1494 -- # bdfs=() 00:13:20.357 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1494 -- # local bdfs 00:13:20.357 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1495 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:20.357 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1495 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:20.357 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1495 -- # jq -r '.config[].params.traddr' 00:13:20.357 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # (( 4 == 0 )) 00:13:20.358 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:20.358 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # echo 0000:00:10.0 00:13:20.358 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:13:20.358 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:13:20.358 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65645 00:13:20.358 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:13:20.358 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:20.358 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65645 00:13:20.358 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # '[' -z 65645 ']' 00:13:20.358 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.358 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:20.358 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
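A note on the BDF discovery traced above: get_first_nvme_bdf() does not probe sysfs itself; it calls gen_nvme.sh, which emits a JSON bdev config covering every local controller, extracts each PCI address from params.traddr with jq, and takes the first entry. A minimal sketch of that pattern under the same repo layout as this run (error handling condensed; the real helpers live in common/autotest_common.sh):

    # Enumerate NVMe BDFs the way get_nvme_bdfs() does in the trace above.
    mapfile -t bdfs < <(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh |
        jq -r '.config[].params.traddr')
    (( ${#bdfs[@]} > 0 )) || { echo 'no NVMe controllers found' >&2; exit 1; }
    bdf=${bdfs[0]}   # this run picks 0000:00:10.0 out of four controllers
    printf 'using %s (of: %s)\n' "$bdf" "${bdfs[*]}"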
00:13:20.358 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:20.358 07:49:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:13:20.615 [2024-11-06 07:49:43.078715] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:13:20.615 [2024-11-06 07:49:43.078933] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65645 ] 00:13:20.873 [2024-11-06 07:49:43.306211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:20.873 [2024-11-06 07:49:43.471482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:20.873 [2024-11-06 07:49:43.471602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:20.873 [2024-11-06 07:49:43.471754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:20.873 [2024-11-06 07:49:43.471941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.804 07:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:21.804 07:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # return 0 00:13:21.804 07:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:13:21.804 07:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.804 07:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:13:22.062 nvme0n1 00:13:22.062 07:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.062 07:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:13:22.062 07:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_930Eg.txt 00:13:22.062 07:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:13:22.062 07:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.062 07:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:13:22.062 true 00:13:22.062 07:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.062 07:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:13:22.062 07:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1730879384 00:13:22.062 07:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65673 00:13:22.062 07:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:22.062 07:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:13:22.062 07:49:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:13:23.962 07:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:13:23.962 07:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.962 07:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:13:23.962 [2024-11-06 07:49:46.509951] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:13:23.962 [2024-11-06 07:49:46.510508] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:13:23.962 [2024-11-06 07:49:46.510567] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:23.962 [2024-11-06 07:49:46.510593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.962 [2024-11-06 07:49:46.512796] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:13:23.962 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65673 00:13:23.962 07:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.962 07:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65673 00:13:23.962 07:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65673 00:13:23.962 07:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:13:23.962 07:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:13:23.962 07:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:23.962 07:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.962 07:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:13:23.962 07:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.962 07:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:13:23.962 07:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_930Eg.txt 00:13:24.221 07:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:13:24.221 07:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:13:24.221 07:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:13:24.221 07:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:13:24.221 07:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:13:24.221 07:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:13:24.221 07:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:13:24.221 07:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:13:24.221 07:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:13:24.221 07:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:13:24.221 07:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:13:24.221 07:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:13:24.221 07:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:13:24.221 07:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:13:24.221 07:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:13:24.221 07:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:13:24.221 07:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:13:24.221 07:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:13:24.221 07:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:13:24.221 07:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_930Eg.txt 00:13:24.221 07:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65645 00:13:24.221 07:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # '[' -z 65645 ']' 00:13:24.221 07:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # kill -0 65645 00:13:24.221 07:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # uname 00:13:24.221 07:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:24.221 07:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65645 00:13:24.221 killing process with pid 65645 00:13:24.221 07:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:24.221 07:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:24.221 07:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65645' 00:13:24.221 07:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@969 -- # kill 65645 00:13:24.221 07:49:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@974 -- # wait 65645 00:13:26.752 07:49:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:13:26.752 07:49:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:13:26.752 ************************************ 00:13:26.752 END TEST bdev_nvme_reset_stuck_adm_cmd 00:13:26.752 ************************************ 00:13:26.752 00:13:26.752 real 0m6.332s 
00:13:26.752 user 0m21.891s 00:13:26.752 sys 0m0.823s 00:13:26.752 07:49:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:26.752 07:49:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:13:26.752 07:49:49 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:13:26.752 07:49:49 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:13:26.752 07:49:49 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:26.752 07:49:49 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:26.752 07:49:49 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:26.752 ************************************ 00:13:26.752 START TEST nvme_fio 00:13:26.752 ************************************ 00:13:26.752 07:49:49 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # nvme_fio_test 00:13:26.752 07:49:49 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:13:26.752 07:49:49 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:13:26.752 07:49:49 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:13:26.752 07:49:49 nvme.nvme_fio -- common/autotest_common.sh@1494 -- # bdfs=() 00:13:26.752 07:49:49 nvme.nvme_fio -- common/autotest_common.sh@1494 -- # local bdfs 00:13:26.752 07:49:49 nvme.nvme_fio -- common/autotest_common.sh@1495 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:26.752 07:49:49 nvme.nvme_fio -- common/autotest_common.sh@1495 -- # jq -r '.config[].params.traddr' 00:13:26.752 07:49:49 nvme.nvme_fio -- common/autotest_common.sh@1495 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:26.752 07:49:49 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # (( 4 == 0 )) 00:13:26.752 07:49:49 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:26.752 07:49:49 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:13:26.752 07:49:49 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:13:26.752 07:49:49 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:26.752 07:49:49 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:13:26.752 07:49:49 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:27.021 07:49:49 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:27.021 07:49:49 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:13:27.279 07:49:49 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:13:27.279 07:49:49 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:13:27.279 07:49:49 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:13:27.279 07:49:49 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:13:27.279 07:49:49 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:27.279 07:49:49 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:13:27.279 07:49:49 nvme.nvme_fio -- 
common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:27.279 07:49:49 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:13:27.279 07:49:49 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:13:27.279 07:49:49 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:13:27.279 07:49:49 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:13:27.279 07:49:49 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:27.279 07:49:49 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:13:27.279 07:49:49 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:27.279 07:49:49 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:27.279 07:49:49 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:13:27.279 07:49:49 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:27.279 07:49:49 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:13:27.536 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:27.536 fio-3.35 00:13:27.536 Starting 1 thread 00:13:30.820 00:13:30.820 test: (groupid=0, jobs=1): err= 0: pid=65827: Wed Nov 6 07:49:53 2024 00:13:30.820 read: IOPS=14.8k, BW=57.9MiB/s (60.8MB/s)(116MiB/2001msec) 00:13:30.820 slat (usec): min=4, max=572, avg= 6.95, stdev= 4.10 00:13:30.820 clat (usec): min=332, max=11161, avg=4289.91, stdev=760.20 00:13:30.820 lat (usec): min=339, max=11167, avg=4296.86, stdev=761.06 00:13:30.820 clat percentiles (usec): 00:13:30.820 | 1.00th=[ 3097], 5.00th=[ 3392], 10.00th=[ 3523], 20.00th=[ 3654], 00:13:30.820 | 30.00th=[ 3818], 40.00th=[ 4015], 50.00th=[ 4228], 60.00th=[ 4490], 00:13:30.820 | 70.00th=[ 4621], 80.00th=[ 4752], 90.00th=[ 4883], 95.00th=[ 5276], 00:13:30.820 | 99.00th=[ 7242], 99.50th=[ 7832], 99.90th=[10290], 99.95th=[10421], 00:13:30.820 | 99.99th=[11076] 00:13:30.820 bw ( KiB/s): min=56832, max=64672, per=100.00%, avg=59794.67, stdev=4256.27, samples=3 00:13:30.820 iops : min=14208, max=16168, avg=14948.67, stdev=1064.07, samples=3 00:13:30.820 write: IOPS=14.8k, BW=58.0MiB/s (60.8MB/s)(116MiB/2001msec); 0 zone resets 00:13:30.820 slat (usec): min=4, max=463, avg= 7.14, stdev= 3.84 00:13:30.820 clat (usec): min=308, max=12508, avg=4305.63, stdev=781.91 00:13:30.820 lat (usec): min=315, max=12513, avg=4312.78, stdev=782.81 00:13:30.820 clat percentiles (usec): 00:13:30.820 | 1.00th=[ 3097], 5.00th=[ 3392], 10.00th=[ 3523], 20.00th=[ 3687], 00:13:30.820 | 30.00th=[ 3818], 40.00th=[ 4015], 50.00th=[ 4228], 60.00th=[ 4555], 00:13:30.820 | 70.00th=[ 4621], 80.00th=[ 4752], 90.00th=[ 4883], 95.00th=[ 5342], 00:13:30.820 | 99.00th=[ 7373], 99.50th=[ 7963], 99.90th=[10290], 99.95th=[10552], 00:13:30.820 | 99.99th=[11207] 00:13:30.820 bw ( KiB/s): min=56272, max=65056, per=100.00%, avg=59541.33, stdev=4803.21, samples=3 00:13:30.820 iops : min=14068, max=16264, avg=14885.33, stdev=1200.80, samples=3 00:13:30.820 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:13:30.820 lat (msec) : 2=0.05%, 4=39.56%, 10=60.23%, 20=0.12% 00:13:30.820 cpu : usr=98.00%, sys=0.50%, ctx=28, majf=0, minf=607 
00:13:30.820 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:30.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:30.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:30.820 issued rwts: total=29678,29695,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:30.820 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:30.820 00:13:30.820 Run status group 0 (all jobs): 00:13:30.820 READ: bw=57.9MiB/s (60.8MB/s), 57.9MiB/s-57.9MiB/s (60.8MB/s-60.8MB/s), io=116MiB (122MB), run=2001-2001msec 00:13:30.820 WRITE: bw=58.0MiB/s (60.8MB/s), 58.0MiB/s-58.0MiB/s (60.8MB/s-60.8MB/s), io=116MiB (122MB), run=2001-2001msec 00:13:30.820 ----------------------------------------------------- 00:13:30.820 Suppressions used: 00:13:30.820 count bytes template 00:13:30.820 1 32 /usr/src/fio/parse.c 00:13:30.820 1 8 libtcmalloc_minimal.so 00:13:30.820 ----------------------------------------------------- 00:13:30.820 00:13:30.820 07:49:53 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:30.820 07:49:53 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:30.820 07:49:53 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:13:30.820 07:49:53 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:31.090 07:49:53 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:13:31.090 07:49:53 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:31.348 07:49:53 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:13:31.348 07:49:53 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:13:31.348 07:49:53 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:13:31.348 07:49:53 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:13:31.348 07:49:53 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:31.348 07:49:53 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:13:31.348 07:49:53 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:31.348 07:49:53 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:13:31.348 07:49:53 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:13:31.348 07:49:53 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:13:31.348 07:49:53 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:31.348 07:49:53 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:13:31.348 07:49:53 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:13:31.348 07:49:53 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:31.348 07:49:53 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:31.348 07:49:53 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:13:31.348 07:49:53 nvme.nvme_fio -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:31.348 07:49:53 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:13:31.606 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:31.606 fio-3.35 00:13:31.606 Starting 1 thread 00:13:34.893 00:13:34.893 test: (groupid=0, jobs=1): err= 0: pid=65893: Wed Nov 6 07:49:57 2024 00:13:34.893 read: IOPS=15.2k, BW=59.3MiB/s (62.1MB/s)(119MiB/2001msec) 00:13:34.893 slat (nsec): min=4710, max=52775, avg=7334.05, stdev=4281.98 00:13:34.893 clat (usec): min=316, max=15673, avg=4189.16, stdev=1112.02 00:13:34.893 lat (usec): min=324, max=15726, avg=4196.49, stdev=1115.28 00:13:34.893 clat percentiles (usec): 00:13:34.893 | 1.00th=[ 2966], 5.00th=[ 3261], 10.00th=[ 3458], 20.00th=[ 3589], 00:13:34.893 | 30.00th=[ 3654], 40.00th=[ 3752], 50.00th=[ 3851], 60.00th=[ 4015], 00:13:34.893 | 70.00th=[ 4359], 80.00th=[ 4555], 90.00th=[ 4883], 95.00th=[ 5932], 00:13:34.893 | 99.00th=[ 8979], 99.50th=[ 9110], 99.90th=[ 9503], 99.95th=[12518], 00:13:34.893 | 99.99th=[15270] 00:13:34.893 bw ( KiB/s): min=56544, max=63096, per=97.33%, avg=59053.33, stdev=3534.90, samples=3 00:13:34.893 iops : min=14136, max=15774, avg=14763.33, stdev=883.72, samples=3 00:13:34.893 write: IOPS=15.2k, BW=59.4MiB/s (62.2MB/s)(119MiB/2001msec); 0 zone resets 00:13:34.893 slat (nsec): min=4864, max=55935, avg=7536.86, stdev=4330.69 00:13:34.893 clat (usec): min=285, max=15340, avg=4204.50, stdev=1133.92 00:13:34.893 lat (usec): min=292, max=15366, avg=4212.04, stdev=1137.18 00:13:34.893 clat percentiles (usec): 00:13:34.893 | 1.00th=[ 2966], 5.00th=[ 3294], 10.00th=[ 3458], 20.00th=[ 3589], 00:13:34.893 | 30.00th=[ 3687], 40.00th=[ 3752], 50.00th=[ 3884], 60.00th=[ 4047], 00:13:34.893 | 70.00th=[ 4359], 80.00th=[ 4555], 90.00th=[ 4883], 95.00th=[ 6259], 00:13:34.893 | 99.00th=[ 8979], 99.50th=[ 9110], 99.90th=[10028], 99.95th=[12780], 00:13:34.893 | 99.99th=[15008] 00:13:34.893 bw ( KiB/s): min=56848, max=62688, per=96.87%, avg=58885.33, stdev=3296.01, samples=3 00:13:34.893 iops : min=14212, max=15672, avg=14721.33, stdev=824.00, samples=3 00:13:34.893 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:13:34.893 lat (msec) : 2=0.05%, 4=58.61%, 10=41.20%, 20=0.10% 00:13:34.893 cpu : usr=98.85%, sys=0.15%, ctx=5, majf=0, minf=606 00:13:34.893 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:34.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.893 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:34.893 issued rwts: total=30352,30410,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:34.893 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:34.893 00:13:34.893 Run status group 0 (all jobs): 00:13:34.893 READ: bw=59.3MiB/s (62.1MB/s), 59.3MiB/s-59.3MiB/s (62.1MB/s-62.1MB/s), io=119MiB (124MB), run=2001-2001msec 00:13:34.893 WRITE: bw=59.4MiB/s (62.2MB/s), 59.4MiB/s-59.4MiB/s (62.2MB/s-62.2MB/s), io=119MiB (125MB), run=2001-2001msec 00:13:34.893 ----------------------------------------------------- 00:13:34.893 Suppressions used: 00:13:34.893 count bytes template 00:13:34.893 1 32 /usr/src/fio/parse.c 00:13:34.893 1 8 libtcmalloc_minimal.so 00:13:34.893 ----------------------------------------------------- 00:13:34.893 
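Both fio passes above go through SPDK's fio plugin rather than a kernel block device, and the invocation has two quirks worth calling out: the harness runs ldd on the plugin to locate the ASAN runtime and puts that runtime first in LD_PRELOAD, so the sanitizer is loaded before anything it interposes on, and the controller's PCIe address is passed as fio's filename with the colons rewritten to dots, since fio treats ':' as a filename separator. A condensed sketch of the same invocation, using the paths and the 0000:00:11.0 controller from the pass that just finished:

    # Drive fio through the SPDK user-space NVMe driver, as in the run above.
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    job=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio
    # The sanitizer runtime must precede the plugin in LD_PRELOAD.
    asan_lib=$(ldd "$plugin" | awk '/libasan/ {print $3}')
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$job" \
        '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096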
00:13:34.893 07:49:57 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:34.893 07:49:57 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:34.893 07:49:57 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:13:34.893 07:49:57 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:35.151 07:49:57 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:13:35.152 07:49:57 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:35.719 07:49:58 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:13:35.719 07:49:58 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:13:35.719 07:49:58 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:13:35.719 07:49:58 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:13:35.719 07:49:58 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:35.719 07:49:58 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:13:35.719 07:49:58 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:35.719 07:49:58 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:13:35.719 07:49:58 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:13:35.719 07:49:58 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:13:35.719 07:49:58 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:13:35.719 07:49:58 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:35.719 07:49:58 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:13:35.719 07:49:58 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:35.719 07:49:58 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:35.719 07:49:58 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:13:35.719 07:49:58 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:35.719 07:49:58 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:13:35.719 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:35.719 fio-3.35 00:13:35.719 Starting 1 thread 00:13:39.019 00:13:39.019 test: (groupid=0, jobs=1): err= 0: pid=65954: Wed Nov 6 07:50:01 2024 00:13:39.019 read: IOPS=14.8k, BW=57.8MiB/s (60.7MB/s)(116MiB/2001msec) 00:13:39.019 slat (nsec): min=4753, max=62467, avg=7011.98, stdev=2325.44 00:13:39.019 clat (usec): min=328, max=9029, avg=4296.28, stdev=657.44 00:13:39.019 lat (usec): min=335, max=9081, avg=4303.29, stdev=658.35 00:13:39.019 clat percentiles (usec): 00:13:39.019 | 1.00th=[ 3392], 5.00th=[ 3556], 10.00th=[ 3621], 20.00th=[ 3687], 00:13:39.019 | 30.00th=[ 
3785], 40.00th=[ 4015], 50.00th=[ 4359], 60.00th=[ 4490], 00:13:39.019 | 70.00th=[ 4555], 80.00th=[ 4686], 90.00th=[ 4883], 95.00th=[ 5080], 00:13:39.019 | 99.00th=[ 6652], 99.50th=[ 6849], 99.90th=[ 7373], 99.95th=[ 7504], 00:13:39.019 | 99.99th=[ 8979] 00:13:39.019 bw ( KiB/s): min=57912, max=64072, per=100.00%, avg=60632.00, stdev=3142.48, samples=3 00:13:39.019 iops : min=14478, max=16018, avg=15158.00, stdev=785.62, samples=3 00:13:39.019 write: IOPS=14.8k, BW=57.9MiB/s (60.7MB/s)(116MiB/2001msec); 0 zone resets 00:13:39.019 slat (nsec): min=4985, max=65459, avg=7179.93, stdev=2384.34 00:13:39.019 clat (usec): min=412, max=8861, avg=4307.34, stdev=652.21 00:13:39.019 lat (usec): min=420, max=8880, avg=4314.52, stdev=653.07 00:13:39.019 clat percentiles (usec): 00:13:39.019 | 1.00th=[ 3392], 5.00th=[ 3556], 10.00th=[ 3621], 20.00th=[ 3720], 00:13:39.019 | 30.00th=[ 3785], 40.00th=[ 4047], 50.00th=[ 4359], 60.00th=[ 4490], 00:13:39.019 | 70.00th=[ 4621], 80.00th=[ 4686], 90.00th=[ 4883], 95.00th=[ 5145], 00:13:39.019 | 99.00th=[ 6652], 99.50th=[ 6783], 99.90th=[ 7373], 99.95th=[ 7570], 00:13:39.019 | 99.99th=[ 8717] 00:13:39.019 bw ( KiB/s): min=56904, max=63504, per=100.00%, avg=60234.67, stdev=3300.43, samples=3 00:13:39.019 iops : min=14226, max=15876, avg=15058.67, stdev=825.11, samples=3 00:13:39.019 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:13:39.019 lat (msec) : 2=0.08%, 4=39.16%, 10=60.72% 00:13:39.019 cpu : usr=99.10%, sys=0.00%, ctx=5, majf=0, minf=606 00:13:39.019 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:39.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.019 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:39.019 issued rwts: total=29634,29664,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:39.019 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:39.019 00:13:39.019 Run status group 0 (all jobs): 00:13:39.019 READ: bw=57.8MiB/s (60.7MB/s), 57.8MiB/s-57.8MiB/s (60.7MB/s-60.7MB/s), io=116MiB (121MB), run=2001-2001msec 00:13:39.019 WRITE: bw=57.9MiB/s (60.7MB/s), 57.9MiB/s-57.9MiB/s (60.7MB/s-60.7MB/s), io=116MiB (122MB), run=2001-2001msec 00:13:39.277 ----------------------------------------------------- 00:13:39.277 Suppressions used: 00:13:39.277 count bytes template 00:13:39.277 1 32 /usr/src/fio/parse.c 00:13:39.277 1 8 libtcmalloc_minimal.so 00:13:39.277 ----------------------------------------------------- 00:13:39.277 00:13:39.277 07:50:01 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:39.277 07:50:01 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:39.277 07:50:01 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:13:39.277 07:50:01 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:39.534 07:50:02 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:39.534 07:50:02 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:13:39.793 07:50:02 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:13:39.793 07:50:02 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:13:39.793 07:50:02 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:13:39.793 07:50:02 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:13:39.793 07:50:02 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:39.793 07:50:02 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:13:39.793 07:50:02 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:39.793 07:50:02 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:13:39.793 07:50:02 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:13:39.793 07:50:02 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:13:39.793 07:50:02 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:39.793 07:50:02 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:13:39.793 07:50:02 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:13:39.793 07:50:02 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:39.793 07:50:02 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:39.793 07:50:02 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:13:39.793 07:50:02 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:39.793 07:50:02 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:13:40.052 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:40.052 fio-3.35 00:13:40.052 Starting 1 thread 00:13:44.239 00:13:44.239 test: (groupid=0, jobs=1): err= 0: pid=66020: Wed Nov 6 07:50:06 2024 00:13:44.239 read: IOPS=14.8k, BW=57.7MiB/s (60.5MB/s)(115MiB/2001msec) 00:13:44.239 slat (nsec): min=5043, max=54814, avg=7147.58, stdev=2508.84 00:13:44.239 clat (usec): min=254, max=10967, avg=4310.62, stdev=696.35 00:13:44.239 lat (usec): min=261, max=11019, avg=4317.77, stdev=697.51 00:13:44.239 clat percentiles (usec): 00:13:44.239 | 1.00th=[ 3621], 5.00th=[ 3785], 10.00th=[ 3818], 20.00th=[ 3884], 00:13:44.239 | 30.00th=[ 3949], 40.00th=[ 3982], 50.00th=[ 4047], 60.00th=[ 4146], 00:13:44.239 | 70.00th=[ 4293], 80.00th=[ 4752], 90.00th=[ 4948], 95.00th=[ 5407], 00:13:44.239 | 99.00th=[ 7242], 99.50th=[ 7373], 99.90th=[ 8356], 99.95th=[ 8979], 00:13:44.239 | 99.99th=[10814] 00:13:44.239 bw ( KiB/s): min=54944, max=61848, per=97.59%, avg=57613.33, stdev=3708.64, samples=3 00:13:44.239 iops : min=13736, max=15462, avg=14403.33, stdev=927.16, samples=3 00:13:44.239 write: IOPS=14.8k, BW=57.7MiB/s (60.5MB/s)(115MiB/2001msec); 0 zone resets 00:13:44.239 slat (nsec): min=5085, max=55620, avg=7306.40, stdev=2548.13 00:13:44.239 clat (usec): min=304, max=10734, avg=4324.34, stdev=698.60 00:13:44.239 lat (usec): min=311, max=10747, avg=4331.65, stdev=699.77 00:13:44.239 clat percentiles (usec): 00:13:44.239 | 1.00th=[ 3654], 5.00th=[ 3785], 10.00th=[ 3851], 20.00th=[ 3916], 00:13:44.239 | 30.00th=[ 3949], 40.00th=[ 4015], 50.00th=[ 4047], 60.00th=[ 4146], 00:13:44.239 | 70.00th=[ 4359], 80.00th=[ 4752], 90.00th=[ 4948], 95.00th=[ 5407], 
00:13:44.239 | 99.00th=[ 7242], 99.50th=[ 7373], 99.90th=[ 8356], 99.95th=[ 9110], 00:13:44.239 | 99.99th=[10552] 00:13:44.239 bw ( KiB/s): min=55288, max=61464, per=97.31%, avg=57501.33, stdev=3439.60, samples=3 00:13:44.239 iops : min=13822, max=15366, avg=14375.33, stdev=859.90, samples=3 00:13:44.239 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:13:44.239 lat (msec) : 2=0.06%, 4=39.65%, 10=60.23%, 20=0.03% 00:13:44.239 cpu : usr=98.85%, sys=0.15%, ctx=2, majf=0, minf=604 00:13:44.239 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:44.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:44.239 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:44.239 issued rwts: total=29533,29560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:44.239 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:44.239 00:13:44.239 Run status group 0 (all jobs): 00:13:44.239 READ: bw=57.7MiB/s (60.5MB/s), 57.7MiB/s-57.7MiB/s (60.5MB/s-60.5MB/s), io=115MiB (121MB), run=2001-2001msec 00:13:44.239 WRITE: bw=57.7MiB/s (60.5MB/s), 57.7MiB/s-57.7MiB/s (60.5MB/s-60.5MB/s), io=115MiB (121MB), run=2001-2001msec 00:13:44.239 ----------------------------------------------------- 00:13:44.239 Suppressions used: 00:13:44.239 count bytes template 00:13:44.239 1 32 /usr/src/fio/parse.c 00:13:44.239 1 8 libtcmalloc_minimal.so 00:13:44.239 ----------------------------------------------------- 00:13:44.239 00:13:44.239 ************************************ 00:13:44.239 END TEST nvme_fio 00:13:44.239 ************************************ 00:13:44.239 07:50:06 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:44.239 07:50:06 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:13:44.239 00:13:44.239 real 0m17.726s 00:13:44.239 user 0m13.852s 00:13:44.239 sys 0m2.820s 00:13:44.239 07:50:06 nvme.nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:44.239 07:50:06 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:13:44.239 ************************************ 00:13:44.239 END TEST nvme 00:13:44.239 ************************************ 00:13:44.239 00:13:44.239 real 1m33.115s 00:13:44.239 user 3m48.548s 00:13:44.239 sys 0m16.224s 00:13:44.239 07:50:06 nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:44.239 07:50:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:44.239 07:50:06 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:13:44.239 07:50:06 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:13:44.239 07:50:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:44.240 07:50:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:44.240 07:50:06 -- common/autotest_common.sh@10 -- # set +x 00:13:44.240 ************************************ 00:13:44.240 START TEST nvme_scc 00:13:44.240 ************************************ 00:13:44.240 07:50:06 nvme_scc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:13:44.499 * Looking for test storage... 
00:13:44.499 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:44.499 07:50:06 nvme_scc -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:13:44.499 07:50:06 nvme_scc -- common/autotest_common.sh@1689 -- # lcov --version 00:13:44.499 07:50:06 nvme_scc -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:13:44.499 07:50:07 nvme_scc -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:13:44.499 07:50:07 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:44.499 07:50:07 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:44.499 07:50:07 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:44.499 07:50:07 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:13:44.499 07:50:07 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:13:44.499 07:50:07 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:13:44.499 07:50:07 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:13:44.499 07:50:07 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:13:44.499 07:50:07 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:13:44.499 07:50:07 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:13:44.499 07:50:07 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:44.499 07:50:07 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:13:44.499 07:50:07 nvme_scc -- scripts/common.sh@345 -- # : 1 00:13:44.499 07:50:07 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:44.499 07:50:07 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:44.499 07:50:07 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:13:44.499 07:50:07 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:13:44.499 07:50:07 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:44.499 07:50:07 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:13:44.499 07:50:07 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:44.499 07:50:07 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:13:44.499 07:50:07 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:13:44.499 07:50:07 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:44.499 07:50:07 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:13:44.499 07:50:07 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:44.499 07:50:07 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:44.499 07:50:07 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:44.499 07:50:07 nvme_scc -- scripts/common.sh@368 -- # return 0 00:13:44.499 07:50:07 nvme_scc -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:44.499 07:50:07 nvme_scc -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:13:44.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.499 --rc genhtml_branch_coverage=1 00:13:44.499 --rc genhtml_function_coverage=1 00:13:44.499 --rc genhtml_legend=1 00:13:44.499 --rc geninfo_all_blocks=1 00:13:44.499 --rc geninfo_unexecuted_blocks=1 00:13:44.499 00:13:44.499 ' 00:13:44.499 07:50:07 nvme_scc -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:13:44.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.499 --rc genhtml_branch_coverage=1 00:13:44.499 --rc genhtml_function_coverage=1 00:13:44.499 --rc genhtml_legend=1 00:13:44.499 --rc geninfo_all_blocks=1 00:13:44.499 --rc geninfo_unexecuted_blocks=1 00:13:44.499 00:13:44.499 ' 00:13:44.499 07:50:07 nvme_scc -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 
00:13:44.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.499 --rc genhtml_branch_coverage=1 00:13:44.499 --rc genhtml_function_coverage=1 00:13:44.499 --rc genhtml_legend=1 00:13:44.499 --rc geninfo_all_blocks=1 00:13:44.499 --rc geninfo_unexecuted_blocks=1 00:13:44.499 00:13:44.499 ' 00:13:44.499 07:50:07 nvme_scc -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:13:44.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.499 --rc genhtml_branch_coverage=1 00:13:44.499 --rc genhtml_function_coverage=1 00:13:44.499 --rc genhtml_legend=1 00:13:44.499 --rc geninfo_all_blocks=1 00:13:44.499 --rc geninfo_unexecuted_blocks=1 00:13:44.499 00:13:44.499 ' 00:13:44.499 07:50:07 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:44.499 07:50:07 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:44.499 07:50:07 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:13:44.499 07:50:07 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:13:44.499 07:50:07 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:44.499 07:50:07 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:13:44.499 07:50:07 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:44.499 07:50:07 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:44.499 07:50:07 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:44.499 07:50:07 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.499 07:50:07 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.499 07:50:07 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.499 07:50:07 nvme_scc -- paths/export.sh@5 -- # export PATH 00:13:44.499 07:50:07 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
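The lcov probe that opens this test (and the reset test earlier) gates the coverage flags on a pure-bash version comparison: scripts/common.sh splits both version strings into arrays on '.', '-' and ':' and compares them field by field under the requested operator, so `lt 1.15 2` succeeds here and the lcov 1.x spelling of the branch/function-coverage options is exported. A simplified sketch of the less-than case (numeric fields only; the real cmp_versions takes the operator as an argument and dispatches the other comparisons through its case statement):

    # version_lt A B: succeed when version A sorts strictly below version B.
    version_lt() {
        local -a v1 v2
        IFS=.- read -ra v1 <<< "$1"
        IFS=.- read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for ((i = 0; i < n; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first differing field decides
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not less-than
    }
    version_lt 1.15 2 && echo 'lcov 1.x flag spelling selected'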
00:13:44.499 07:50:07 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:13:44.499 07:50:07 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:13:44.499 07:50:07 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:13:44.499 07:50:07 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:13:44.499 07:50:07 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:13:44.499 07:50:07 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:13:44.499 07:50:07 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:13:44.499 07:50:07 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:13:44.499 07:50:07 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:13:44.499 07:50:07 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:44.500 07:50:07 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:13:44.500 07:50:07 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:13:44.500 07:50:07 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:13:44.500 07:50:07 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:45.067 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:45.067 Waiting for block devices as requested 00:13:45.067 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:45.326 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:45.326 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:45.326 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:50.600 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:50.600 07:50:12 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:13:50.600 07:50:12 nvme_scc -- scripts/common.sh@18 -- # local i 00:13:50.600 07:50:12 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:13:50.600 07:50:12 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:50.600 07:50:12 nvme_scc -- scripts/common.sh@27 -- # return 0 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:13:50.600 07:50:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.601 07:50:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.601 07:50:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:50.601 07:50:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:13:50.601 07:50:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:13:50.601 07:50:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.601 07:50:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.601 07:50:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:50.601 07:50:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:50.601 07:50:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:13:50.601 07:50:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.601 07:50:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.601 07:50:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.601 07:50:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:13:50.601 07:50:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:13:50.601 07:50:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.601 07:50:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.601 07:50:12 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.601 07:50:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:13:50.601 07:50:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:13:50.601 07:50:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.601 07:50:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.601 07:50:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.601 07:50:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:13:50.601 07:50:12 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:13:50.601 07:50:12 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.601 07:50:12 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.601 07:50:12 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.601 07:50:12 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
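Note: several of the fields just captured (nvme0[oacs]=0x12a, nvme0[frmw]=0x3, nvme0[lpa]=0x7) are bitmasks rather than scalars, and once parsed they can be tested with plain shell arithmetic. The bit positions named below follow the NVMe base specification (OACS bit 1 = Format NVM, bit 3 = Namespace Management, bit 8 = Doorbell Buffer Config) and are an assumption worth checking against the revision this controller reports in ver (0x10400, i.e. NVMe 1.4.0):

oacs=$(( 0x12a ))                         # from nvme0[oacs] above
(( oacs & (1 << 1) )) && echo "Format NVM supported"
(( oacs & (1 << 3) )) && echo "Namespace Management supported"
(( oacs & (1 << 8) )) && echo "Doorbell Buffer Config supported"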
00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.601 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:13:50.602 07:50:13 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.602 07:50:13 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.602 07:50:13 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.602 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:50.603 07:50:13 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.603 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
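Note: the id-ns fields parsed so far are enough to work out the namespace's usable size. nsze (0x140000) is the size in logical blocks, and flbas (0x4, captured earlier) selects LBA format 4; the lbaf table parsed just below reports ms:0 lbads:12 (in use) for that entry, i.e. 4096-byte blocks with no metadata, and nlbaf=7 means eight formats lbaf0..lbaf7 are advertised. Assuming, per the NVMe spec, that the low nibble of flbas indexes the lbaf list:

nsze=$(( 0x140000 ))       # namespace size in logical blocks (nvme0n1[nsze])
lbads=12                   # log2(block size) from the in-use lbaf4 entry
bytes=$(( nsze << lbads ))
echo "$bytes bytes ($(( bytes >> 30 )) GiB)"   # 5368709120 bytes = 5 GiB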
00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:50.604 07:50:13 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.604 07:50:13 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:13:50.605 07:50:13 nvme_scc -- scripts/common.sh@18 -- # local i 00:13:50.605 07:50:13 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:13:50.605 07:50:13 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:50.605 07:50:13 nvme_scc -- scripts/common.sh@27 -- # return 0 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.605 07:50:13 nvme_scc -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:13:50.605 07:50:13 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[mdts]=7 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:13:50.605 
07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:13:50.605 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
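Note: before the nvme1 parse continues, it is worth seeing the bookkeeping the scan performed between controllers. At the end of the nvme0 pass above (functions.sh@58-63), the controller was recorded in four global maps (ctrls, nvmes, bdfs, ordered_ctrls) keyed by controller name, and the outer loop then restarted the whole id-ctrl walk for nvme1 at 0000:00:10.0, whose serial '12340 ' distinguishes it from nvme0's '12341 '. A sketch of that bookkeeping, with names mirroring the trace but details simplified:

declare -A ctrls nvmes bdfs
declare -a ordered_ctrls

register_ctrl() {                        # simplified functions.sh@60-63
  local ctrl_dev=$1 pci=$2
  ctrls["$ctrl_dev"]=$ctrl_dev           # e.g. ctrls[nvme0]=nvme0
  nvmes["$ctrl_dev"]=${ctrl_dev}_ns      # name of the per-controller ns map
  bdfs["$ctrl_dev"]=$pci                 # e.g. bdfs[nvme0]=0000:00:11.0
  ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev   # indexed by controller number
}

register_ctrl nvme0 0000:00:11.0
register_ctrl nvme1 0000:00:10.0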
00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:13:50.606 07:50:13 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.606 07:50:13 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:13:50.606 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:13:50.607 07:50:13 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:13:50.607 07:50:13 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:13:50.607 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[ncap]=0x17a17a 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.608 07:50:13 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.608 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:50.609 
07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:13:50.609 07:50:13 nvme_scc -- scripts/common.sh@18 -- # local i 00:13:50.609 07:50:13 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:13:50.609 07:50:13 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:50.609 07:50:13 nvme_scc -- scripts/common.sh@27 -- # return 0 00:13:50.609 07:50:13 nvme_scc -- 
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.609 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.610 07:50:13 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:13:50.610 07:50:13 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.610 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:13:50.876 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:13:50.876 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.876 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.876 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.876 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:13:50.876 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:13:50.876 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.876 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.876 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.876 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:13:50.876 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:13:50.876 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.876 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.876 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.876 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:13:50.876 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:13:50.876 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.876 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.876 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.876 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:13:50.876 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:13:50.876 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.876 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.876 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:50.876 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:13:50.876 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:13:50.876 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.876 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.876 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:50.876 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:13:50.876 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:13:50.876 07:50:13 nvme_scc -- 
00:13:50.876 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3
00:13:50.876 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3
00:13:50.876 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7
00:13:50.876 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0
00:13:50.876 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0
00:13:50.876 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0
00:13:50.876 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0
00:13:50.876 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343
00:13:50.876 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373
00:13:50.876 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0
00:13:50.876 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0
00:13:50.876 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0
00:13:50.876 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0
00:13:50.876 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0
00:13:50.876 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0
00:13:50.876 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0
00:13:50.876 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0
00:13:50.876 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0
00:13:50.876 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0
00:13:50.876 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0
00:13:50.876 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0
00:13:50.877 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0
00:13:50.877 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0
00:13:50.877 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0
00:13:50.877 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0
00:13:50.877 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0
00:13:50.877 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0
00:13:50.877 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0
00:13:50.877 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0
00:13:50.877 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0
00:13:50.877 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0
00:13:50.877 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0
00:13:50.877 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0
00:13:50.877 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0
00:13:50.877 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66
00:13:50.877 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44
00:13:50.877 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0
00:13:50.877 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256
00:13:50.877 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d
00:13:50.877 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0
00:13:50.877 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0
00:13:50.877 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7
00:13:50.877 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0
00:13:50.877 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0
00:13:50.877 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0
00:13:50.877 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0
00:13:50.877 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0
00:13:50.877 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3
00:13:50.877 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1
00:13:50.877 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0
00:13:50.877 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0
00:13:50.877 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0
00:13:50.878 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342
00:13:50.878 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0
00:13:50.878 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0
00:13:50.878 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0
00:13:50.878 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0
00:13:50.878 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0
00:13:50.878 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0
00:13:50.878 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:13:50.878 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-'
00:13:50.878 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=-
00:13:50.878 07:50:13 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:13:50.878 07:50:13 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:13:50.878 07:50:13 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:13:50.878 07:50:13 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:13:50.878 07:50:13 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:13:50.878 07:50:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val
00:13:50.878 07:50:13 nvme_scc -- nvme/functions.sh@18 -- # shift
00:13:50.878 07:50:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()'
00:13:50.878 07:50:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
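The trace above is nvme/functions.sh's nvme_get helper flattening nvme-cli's plain-text "field : value" output into one global bash associative array per device (nvme2 for the controller, then nvme2n1 and siblings for its namespaces). A minimal sketch of that loop, reconstructed from the @16-@23 trace lines; the key trimming and the printf stand-in are assumptions for illustration, not the exact SPDK source:

  #!/usr/bin/env bash
  # Sketch of the nvme_get pattern traced at nvme/functions.sh@16-23.
  nvme_get() {
      local ref=$1 reg val                 # @17: target array name plus loop vars
      shift                                # @18: remaining args = command to run
      local -gA "$ref=()"                  # @20: declare the target array globally
      while IFS=: read -r reg val; do      # @21: split "field : value" on ':'
          [[ -n $val ]] || continue        # @22: skip lines that carry no value
          reg=${reg//[[:space:]]/}         # assumed trim; the exact rule may differ
          eval "${ref}[\$reg]=\${val# }"   # @23: e.g. nvme2[wctemp]=343
      done < <("$@")                       # @16: the nvme-cli invocation
  }
  # Hypothetical stand-in for nvme-cli so the sketch runs anywhere:
  nvme_get demo printf 'wctemp : 343\ncctemp : 373\n'
  echo "${demo[cctemp]}"                   # -> 373

Flat arrays make later feature checks one-liners. For instance, sqes=0x66 captured above packs the controller's maximum and required submission-queue entry sizes as powers of two (both 2^6 = 64 bytes), and cqes=0x44 does the same for completion-queue entries (2^4 = 16 bytes).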
00:13:50.878 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000
00:13:50.878 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000
00:13:50.878 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000
00:13:50.878 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14
00:13:50.878 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7
00:13:50.878 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4
00:13:50.878 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3
00:13:50.878 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f
00:13:50.878 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0
00:13:50.878 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0
00:13:50.878 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0
00:13:50.878 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0
00:13:50.878 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1
00:13:50.878 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0
00:13:50.878 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0
00:13:50.878 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0
00:13:50.879 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0
00:13:50.879 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0
00:13:50.879 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0
00:13:50.879 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0
00:13:50.879 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0
00:13:50.879 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0
00:13:50.879 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0
00:13:50.879 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0
00:13:50.879 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0
00:13:50.879 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0
00:13:50.879 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128
00:13:50.879 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128
00:13:50.879 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127
00:13:50.879 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0
00:13:50.879 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0
00:13:50.879 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0
00:13:50.879 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0
00:13:50.879 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0
00:13:50.879 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000
00:13:50.879 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000
00:13:50.879 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:13:50.879 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:13:50.879 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:13:50.879 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:13:50.879 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:13:50.879 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:13:50.879 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:13:50.879 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:13:50.879 07:50:13 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
00:13:50.879 07:50:13 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:13:50.879 07:50:13 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:13:50.879 07:50:13 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:13:50.879 07:50:13 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:13:50.879 07:50:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val
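Decoding the nvme2n1 geometry captured above: flbas=0x4 carries the active LBA format index in its low nibble, which points at lbaf4, listed as "ms:0 lbads:12 rp:0 (in use)", i.e. no separate metadata and 2^12 = 4096-byte logical blocks; with nsze=0x100000 blocks that is a 4 GiB namespace. The same arithmetic in bash, values copied from the log (the GiB conversion is only illustrative):

  # nvme2n1 id-ns values from the trace above
  flbas=0x4 nsze=0x100000 lbads=12
  fmt=$((flbas & 0xf))          # FLBAS bits 3:0 select the active LBA format -> 4
  bs=$((1 << lbads))            # lbads:12 -> 2^12 = 4096-byte blocks
  echo "lbaf$fmt: $bs B/block, $((nsze * bs / 1024**3)) GiB"
  # -> lbaf4: 4096 B/block, 4 GiB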
00:13:50.880 07:50:13 nvme_scc -- nvme/functions.sh@18 -- # shift
00:13:50.880 07:50:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()'
00:13:50.880 07:50:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:13:50.880 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000
00:13:50.880 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000
00:13:50.880 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000
00:13:50.880 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14
00:13:50.880 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7
00:13:50.880 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4
00:13:50.880 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3
00:13:50.880 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f
00:13:50.880 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0
00:13:50.880 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0
00:13:50.880 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0
00:13:50.880 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0
00:13:50.880 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1
00:13:50.880 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0
00:13:50.880 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0
00:13:50.880 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0
00:13:50.880 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0
00:13:50.880 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0
00:13:50.880 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0
00:13:50.880 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0
00:13:50.880 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0
00:13:50.880 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0
00:13:50.880 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0
00:13:50.880 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0
00:13:50.881 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0
00:13:50.881 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0
00:13:50.881 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128
00:13:50.881 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128
00:13:50.881 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127
00:13:50.881 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0
00:13:50.881 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0
00:13:50.881 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0
00:13:50.881 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0
00:13:50.881 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0
00:13:50.881 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000
00:13:50.881 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000
00:13:50.881 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 '
00:13:50.881 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 '
00:13:50.881 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 '
00:13:50.881 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 '
00:13:50.881 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:13:50.881 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 '
00:13:50.881 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 '
00:13:50.881 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 '
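One detail worth calling out before the next namespace is registered: the @53/@58 lines (seen above for nvme2n1, and again for nvme2n2 just below) fill the per-controller map nvme2_ns through a bash nameref, so a single loop body serves every controller. A minimal sketch of that pattern, reusing the names visible in the trace; the register_ns wrapper itself is hypothetical:

  declare -gA nvme2_ns=()                    # per-controller namespace map
  register_ns() {
      local ctrl=$1 ns
      local -n _ctrl_ns=${ctrl##*/}_ns       # @53: _ctrl_ns now aliases nvme2_ns
      for ns in "$ctrl/${ctrl##*/}n"*; do    # @54: globs nvme2n1, nvme2n2, ...
          [[ -e $ns ]] || continue           # @55: skip if the glob matched nothing
          _ctrl_ns[${ns##*n}]=${ns##*/}      # @58: key "2" -> "nvme2n2"
      done
  }
  register_ns /sys/class/nvme/nvme2
  # afterwards: ${nvme2_ns[1]} = nvme2n1, ${nvme2_ns[2]} = nvme2n2, ...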
00:13:50.881 07:50:13 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
00:13:50.881 07:50:13 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:13:50.881 07:50:13 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:13:50.881 07:50:13 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:13:50.881 07:50:13 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:13:50.881 07:50:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val
00:13:50.881 07:50:13 nvme_scc -- nvme/functions.sh@18 -- # shift
00:13:50.881 07:50:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()'
00:13:50.881 07:50:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:13:50.881 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000
00:13:50.881 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000
00:13:50.881 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000
00:13:50.881 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14
00:13:50.881 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7
00:13:50.881 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4
00:13:50.881 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3
00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f
00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0
00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0
00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0
00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0
00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1
00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0
00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0
00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0
00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0
00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0
00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0
00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0
00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0
00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0
00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0
00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0
00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0
00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@21
-- # read -r reg val 00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:50.882 
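The lbaf0..lbaf7 strings recorded next describe the namespace's selectable LBA formats: lbads is log2 of the data block size and ms is the per-block metadata size in bytes, so lbads:9 means 512-byte and lbads:12 means 4096-byte blocks. A small decoder for one such string (decode_lbaf is a hypothetical helper, not part of functions.sh):

    # Decode one "ms:X lbads:Y rp:Z" string as captured in the xtrace.
    decode_lbaf() {
        local lbaf=$1 ms lbads rp
        ms=${lbaf#*ms:};       ms=${ms%% *}
        lbads=${lbaf#*lbads:}; lbads=${lbads%% *}
        rp=${lbaf#*rp:};       rp=${rp%% *}
        echo "block=$((1 << lbads))B metadata=${ms}B relative_perf=${rp}"
    }
    decode_lbaf 'ms:0 lbads:12 rp:0 (in use)'   # -> block=4096B metadata=0B ...
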
07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:13:50.882 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:13:50.883 07:50:13 nvme_scc -- scripts/common.sh@18 -- # local i 00:13:50.883 07:50:13 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:13:50.883 07:50:13 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:50.883 07:50:13 nvme_scc -- scripts/common.sh@27 -- # return 0 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:13:50.883 07:50:13 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:13:50.883 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.884 
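The wctemp and cctemp values just captured are the controller's warning and critical composite-temperature thresholds, which id-ctrl reports in kelvins per the NVMe spec; converting makes the QEMU defaults readable:

    # wctemp/cctemp are absolute temperatures in kelvins, so the values above are:
    echo "$(( 343 - 273 )) C warning, $(( 373 - 273 )) C critical"   # 70 C, 100 C
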
07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:13:50.884 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.885 07:50:13 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:13:50.885 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
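The sqes=0x66 and cqes=0x44 fields a few entries back pack two powers of two into one byte: the high nibble is the maximum entry size the controller supports, the low nibble the required minimum, both as log2 of bytes. A hypothetical decode_qes helper shows the arithmetic:

    # Decode an NVMe *QES byte: low nibble = min entry size, high nibble = max,
    # each a power-of-two exponent.
    decode_qes() {
        local qes=$(( $1 ))
        echo "min=$((1 << (qes & 0xf)))B max=$((1 << (qes >> 4)))B"
    }
    decode_qes 0x66   # SQ entries: min=64B max=64B
    decode_qes 0x44   # CQ entries: min=16B max=16B
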
00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:50.886 07:50:13 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:13:50.886 07:50:13 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:13:50.886 
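The ctrl_has_scc evaluation, which completes for nvme1 above and repeats for the remaining controllers below, reduces to a single bit test: ONCS bit 8 advertises the optional NVMe Copy command, so oncs=0x15d (binary 1_0101_1101, bit 8 set) qualifies every controller in this VM. The check in isolation:

    # ONCS bit 8 (Optional NVM Command Support: Copy) gates the Simple Copy
    # Command, which is what the scc tests below exercise.
    ctrl_supports_copy() {
        local oncs=$1
        (( oncs & (1 << 8) ))
    }
    ctrl_supports_copy "$((0x15d))" && echo "controller supports Simple Copy"
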
07:50:13 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:13:50.886 07:50:13 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:13:50.887 07:50:13 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:50.887 07:50:13 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:50.887 07:50:13 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:13:50.887 07:50:13 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:13:50.887 07:50:13 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:13:50.887 07:50:13 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:50.887 07:50:13 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:13:50.887 07:50:13 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:13:50.887 07:50:13 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:13:50.887 07:50:13 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:13:50.887 07:50:13 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:13:50.887 07:50:13 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:13:50.887 07:50:13 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:13:50.887 07:50:13 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:13:50.887 07:50:13 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:50.887 07:50:13 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:50.887 07:50:13 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:13:50.887 07:50:13 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:13:50.887 07:50:13 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:13:50.887 07:50:13 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:50.887 07:50:13 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:13:50.887 07:50:13 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:13:50.887 07:50:13 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:13:50.887 07:50:13 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:13:50.887 07:50:13 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:13:50.887 07:50:13 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:13:50.887 07:50:13 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:13:50.887 07:50:13 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:13:50.887 07:50:13 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:50.887 07:50:13 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:50.887 07:50:13 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:13:50.887 07:50:13 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:13:50.887 07:50:13 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:13:50.887 07:50:13 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:13:50.887 07:50:13 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:13:50.887 07:50:13 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:13:50.887 07:50:13 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:13:50.887 07:50:13 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:13:50.887 07:50:13 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:51.454 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:52.021 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:52.021 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:52.283 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:52.283 0000:00:12.0 (1b36 
0010): nvme -> uio_pci_generic 00:13:52.283 07:50:14 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:13:52.283 07:50:14 nvme_scc -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:52.283 07:50:14 nvme_scc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:52.283 07:50:14 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:13:52.283 ************************************ 00:13:52.283 START TEST nvme_simple_copy 00:13:52.283 ************************************ 00:13:52.283 07:50:14 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:13:52.558 Initializing NVMe Controllers 00:13:52.558 Attaching to 0000:00:10.0 00:13:52.558 Controller supports SCC. Attached to 0000:00:10.0 00:13:52.558 Namespace ID: 1 size: 6GB 00:13:52.558 Initialization complete. 00:13:52.558 00:13:52.558 Controller QEMU NVMe Ctrl (12340 ) 00:13:52.558 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:13:52.558 Namespace Block Size:4096 00:13:52.558 Writing LBAs 0 to 63 with Random Data 00:13:52.558 Copied LBAs from 0 - 63 to the Destination LBA 256 00:13:52.558 LBAs matching Written Data: 64 00:13:52.558 00:13:52.558 real 0m0.349s 00:13:52.558 user 0m0.136s 00:13:52.558 sys 0m0.111s 00:13:52.558 07:50:15 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:52.558 07:50:15 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:13:52.558 ************************************ 00:13:52.558 END TEST nvme_simple_copy 00:13:52.558 ************************************ 00:13:52.816 00:13:52.816 real 0m8.333s 00:13:52.816 user 0m1.482s 00:13:52.816 sys 0m1.798s 00:13:52.816 ************************************ 00:13:52.816 END TEST nvme_scc 00:13:52.816 ************************************ 00:13:52.816 07:50:15 nvme_scc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:52.816 07:50:15 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:13:52.816 07:50:15 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:13:52.816 07:50:15 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:13:52.816 07:50:15 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:13:52.816 07:50:15 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:13:52.816 07:50:15 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:13:52.816 07:50:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:52.816 07:50:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:52.816 07:50:15 -- common/autotest_common.sh@10 -- # set +x 00:13:52.816 ************************************ 00:13:52.816 START TEST nvme_fdp 00:13:52.816 ************************************ 00:13:52.816 07:50:15 nvme_fdp -- common/autotest_common.sh@1125 -- # test/nvme/nvme_fdp.sh 00:13:52.816 * Looking for test storage... 
00:13:52.816 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:52.816 07:50:15 nvme_fdp -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:13:52.816 07:50:15 nvme_fdp -- common/autotest_common.sh@1689 -- # lcov --version 00:13:52.816 07:50:15 nvme_fdp -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:13:52.816 07:50:15 nvme_fdp -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:13:52.816 07:50:15 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:52.816 07:50:15 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:52.816 07:50:15 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:52.816 07:50:15 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:13:52.816 07:50:15 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:13:52.816 07:50:15 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:13:52.816 07:50:15 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:13:52.817 07:50:15 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:13:52.817 07:50:15 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:13:52.817 07:50:15 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:13:52.817 07:50:15 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:52.817 07:50:15 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:13:52.817 07:50:15 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:13:52.817 07:50:15 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:52.817 07:50:15 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:52.817 07:50:15 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:13:52.817 07:50:15 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:13:52.817 07:50:15 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:52.817 07:50:15 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:13:52.817 07:50:15 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:13:52.817 07:50:15 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:13:52.817 07:50:15 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:13:52.817 07:50:15 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:52.817 07:50:15 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:13:52.817 07:50:15 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:13:52.817 07:50:15 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:52.817 07:50:15 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:52.817 07:50:15 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:13:52.817 07:50:15 nvme_fdp -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:52.817 07:50:15 nvme_fdp -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:13:52.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.817 --rc genhtml_branch_coverage=1 00:13:52.817 --rc genhtml_function_coverage=1 00:13:52.817 --rc genhtml_legend=1 00:13:52.817 --rc geninfo_all_blocks=1 00:13:52.817 --rc geninfo_unexecuted_blocks=1 00:13:52.817 00:13:52.817 ' 00:13:52.817 07:50:15 nvme_fdp -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:13:52.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.817 --rc genhtml_branch_coverage=1 00:13:52.817 --rc genhtml_function_coverage=1 00:13:52.817 --rc genhtml_legend=1 00:13:52.817 --rc geninfo_all_blocks=1 00:13:52.817 --rc geninfo_unexecuted_blocks=1 00:13:52.817 00:13:52.817 ' 00:13:52.817 07:50:15 nvme_fdp -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 
00:13:52.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.817 --rc genhtml_branch_coverage=1 00:13:52.817 --rc genhtml_function_coverage=1 00:13:52.817 --rc genhtml_legend=1 00:13:52.817 --rc geninfo_all_blocks=1 00:13:52.817 --rc geninfo_unexecuted_blocks=1 00:13:52.817 00:13:52.817 ' 00:13:52.817 07:50:15 nvme_fdp -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:13:52.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.817 --rc genhtml_branch_coverage=1 00:13:52.817 --rc genhtml_function_coverage=1 00:13:52.817 --rc genhtml_legend=1 00:13:52.817 --rc geninfo_all_blocks=1 00:13:52.817 --rc geninfo_unexecuted_blocks=1 00:13:52.817 00:13:52.817 ' 00:13:52.817 07:50:15 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:52.817 07:50:15 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:52.817 07:50:15 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:13:53.076 07:50:15 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:13:53.076 07:50:15 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:53.076 07:50:15 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:13:53.076 07:50:15 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:53.076 07:50:15 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:53.076 07:50:15 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:53.076 07:50:15 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.076 07:50:15 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.076 07:50:15 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.076 07:50:15 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:13:53.076 07:50:15 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
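The cmp_versions trace above is a dotted-version comparison: both version strings are split into numeric fields on the characters ., - and : (IFS=.-:), missing fields default to 0, and the first unequal field decides. Here lt 1.15 2 compares 1 against 2 and returns 0 (true), so the pre-2.0 lcov option names (lcov_branch_coverage, lcov_function_coverage) are selected above. A standalone sketch of the same algorithm, assuming purely numeric fields (the helper name is illustrative, not the script's own):

    # version_lt A B: succeeds when dotted version A sorts strictly before B.
    version_lt() {
        local -a v1 v2
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        local i a b n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            a=${v1[i]:-0} b=${v2[i]:-0}     # missing fields compare as 0
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1                            # equal is not "less than"
    }

    version_lt 1.15 2 && echo 'lcov < 2: use lcov_* rc options'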
00:13:53.076 07:50:15 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:13:53.076 07:50:15 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:13:53.076 07:50:15 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:13:53.076 07:50:15 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:13:53.076 07:50:15 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:13:53.076 07:50:15 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:13:53.076 07:50:15 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:13:53.076 07:50:15 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:13:53.076 07:50:15 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:13:53.076 07:50:15 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:53.076 07:50:15 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:53.334 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:53.593 Waiting for block devices as requested 00:13:53.593 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:53.593 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:53.593 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:53.851 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:59.138 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:59.138 07:50:21 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:13:59.138 07:50:21 nvme_fdp -- scripts/common.sh@18 -- # local i 00:13:59.138 07:50:21 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:13:59.138 07:50:21 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:59.138 07:50:21 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
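The scan_nvme_ctrls pass that begins here drives nvme_get: for each /sys/class/nvme/nvme* controller it runs nvme id-ctrl (and id-ns per namespace), splits every output line at the first colon into a register name and value via IFS=: read -r reg val, and evals the pair into a bash associative array, which is why the trace below shows one eval per register (vid, ssvid, sn, ...). A reduced sketch of that parsing loop, assuming id-ctrl's default "name : value" text output (ctrl_regs and parse_id_ctrl are illustrative names, not functions.sh's own):

    declare -A ctrl_regs

    # parse_id_ctrl DEV: fill ctrl_regs[name]=value from `nvme id-ctrl DEV`.
    parse_id_ctrl() {
        local reg val
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}   # drop the padding around the name
            val=${val# }               # drop the one space after the colon
            [[ -n $reg && -n $val ]] || continue
            ctrl_regs[$reg]=$val       # val keeps any further colons (e.g. ps0)
        done < <(nvme id-ctrl "$1")
    }

    parse_id_ctrl /dev/nvme0
    echo "vid=${ctrl_regs[vid]} oncs=${ctrl_regs[oncs]} ctratt=${ctrl_regs[ctratt]}"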
00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:13:59.138 07:50:21 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:13:59.138 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:13:59.139 07:50:21 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
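Most of the fields captured this way are bitmasks, and the FDP/SCC tests consume them as integers. For example, oacs=0x12a above has bit 5 set (Directives Send/Receive supported), and the oncs=0x15d parsed a little further below has bit 8 set (Copy command supported), which is exactly what the earlier nvme_simple_copy test depends on. Testing such a bit in bash is a one-liner (the values are taken from this controller's trace; the bit positions are from the NVMe base specification):

    oncs=0x15d                        # from nvme0 id-ctrl in this trace
    if (( oncs & (1 << 8) )); then    # ONCS bit 8: Copy command supported
        echo 'controller supports Simple Copy'
    fi
    oacs=0x12a
    (( oacs & (1 << 5) )) && echo 'Directives Send/Receive supported'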
00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:13:59.139 07:50:21 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:13:59.139 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.140 07:50:21 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:13:59.140 07:50:21 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:13:59.140 07:50:21 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:13:59.140 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:59.141 
07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:13:59.141 07:50:21 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.141 07:50:21 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.141 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:13:59.142 07:50:21 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:13:59.142 07:50:21 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:13:59.142 07:50:21 nvme_fdp -- scripts/common.sh@18 -- # local i 00:13:59.142 07:50:21 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:13:59.143 07:50:21 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:59.143 07:50:21 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.143 07:50:21 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.143 
07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:13:59.143 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.144 07:50:21 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.144 
07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.144 07:50:21 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:13:59.144 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:13:59.145 07:50:21 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.145 07:50:21 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.145 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x17a17a ]] 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:13:59.146 07:50:21 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:59.146 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:13:59.147 07:50:21 nvme_fdp -- scripts/common.sh@18 -- # local i 00:13:59.147 07:50:21 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:13:59.147 07:50:21 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:59.147 07:50:21 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:13:59.147 
07:50:21 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()'
00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
00:13:59.147 07:50:21 nvme_fdp -- nvme/functions.sh@21-23 -- # IFS=: read -r reg val loop fills nvme2[], one register per id-ctrl output line:
00:13:59.147 07:50:21 nvme_fdp --     vid=0x1b36 ssvid=0x1af4 sn='12342 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0
00:13:59.148 07:50:21 nvme_fdp --     mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1
00:13:59.148 07:50:21 nvme_fdp --     fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0
00:13:59.148 07:50:21 nvme_fdp --     oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373
00:13:59.148 07:50:21 nvme_fdp --     mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0
00:13:59.149 07:50:21 nvme_fdp --     mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0
00:13:59.149 07:50:21 nvme_fdp --     anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256
00:13:59.149 07:50:21 nvme_fdp --     oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1
00:13:59.150 07:50:21 nvme_fdp --     mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12342 ioccsz=0 iorcsz=0 icdoff=0
00:13:59.150 07:50:21 nvme_fdp --     fcatt=0 msdbd=0 ofcs=0 ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:13:59.150 07:50:21 nvme_fdp --     rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:13:59.150 07:50:21 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:13:59.150 07:50:21 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:13:59.150 07:50:21 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:13:59.150 07:50:21 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:13:59.150 07:50:21 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:13:59.150 07:50:21 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val
00:13:59.150 07:50:21 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:13:59.150 07:50:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()'
00:13:59.150 07:50:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:13:59.150 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000
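The trace above is bash xtrace from nvme/functions.sh: nvme_get splits each "reg : val" line that nvme-cli prints and stores it in a global associative array keyed by register name. A minimal standalone sketch of that pattern (hypothetical script, not the verbatim nvme/functions.sh source; assumes nvme-cli is installed at this path and /dev/nvme2 exists):

    # Split "reg : val" lines on ':' into an associative array (illustrative).
    declare -A ctrl=()
    while IFS=: read -r reg val; do
        reg=${reg%%[[:space:]]*}   # register name with column padding stripped
        val=${val# }               # drop the single space right after ':'
        [[ -n $reg ]] && ctrl[$reg]=$val
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2)
    echo "vid=${ctrl[vid]} sn='${ctrl[sn]}' mdts=${ctrl[mdts]}"

Because read assigns everything after the first ':' to the last variable unsplit, values that themselves contain colons (ps0, the lbaf descriptors) survive intact, which is why the trace shows them captured as whole strings.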
00:13:59.150 07:50:21 nvme_fdp -- nvme/functions.sh@21-23 -- # remaining nvme2n1[] id-ns registers:
00:13:59.150 07:50:21 nvme_fdp --     ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0
00:13:59.151 07:50:21 nvme_fdp --     rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
00:13:59.151 07:50:21 nvme_fdp --     npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0
00:13:59.151 07:50:21 nvme_fdp --     nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:13:59.151 07:50:21 nvme_fdp --     lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 '
00:13:59.151 07:50:21 nvme_fdp --     lbaf3='ms:64 lbads:9 rp:0 ' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 '
00:13:59.152 07:50:21 nvme_fdp --     lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 '
00:13:59.152 07:50:21 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
00:13:59.152 07:50:21 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:13:59.152 07:50:21 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:13:59.152 07:50:21 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:13:59.152 07:50:21 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:13:59.152 07:50:21 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val
00:13:59.152 07:50:21 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:13:59.152 07:50:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()'
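At functions.sh@53-@58 the trace shows how namespaces get attached to the controller: a nameref (local -n) aliases a dynamically named array such as nvme2_ns, and a sysfs glob walks nvme2n1, nvme2n2, and so on. A rough sketch of that enumeration (illustrative function and names, not the verbatim source):

    # Map namespace numbers to device names via a nameref and a sysfs glob.
    enumerate_ns() {
        local ctrl=$1                      # e.g. /sys/class/nvme/nvme2
        local -n _ctrl_ns=${ctrl##*/}_ns   # nameref -> nvme2_ns
        local ns
        for ns in "$ctrl/${ctrl##*/}n"*; do
            [[ -e $ns ]] || continue
            _ctrl_ns[${ns##*n}]=${ns##*/}  # e.g. _ctrl_ns[1]=nvme2n1
        done
    }
    declare -gA nvme2_ns=()
    enumerate_ns /sys/class/nvme/nvme2     # yields nvme2_ns[1]=nvme2n1, nvme2_ns[2]=nvme2n2

The ${ns##*n} expansion strips through the last 'n', leaving just the namespace index, which matches the _ctrl_ns[...] assignments recorded in this log.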
00:13:59.152 07:50:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:13:59.152 07:50:21 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme2n2[] id-ns registers (identical to nvme2n1):
00:13:59.152 07:50:21 nvme_fdp --     nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f
00:13:59.152 07:50:21 nvme_fdp --     dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0
00:13:59.153 07:50:21 nvme_fdp --     noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0
00:13:59.153 07:50:21 nvme_fdp --     anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:13:59.414 07:50:21 nvme_fdp --     nguid=00000000000000000000000000000000 eui64=0000000000000000
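For scale: flbas=0x4 selects LBA format 4, the one both namespaces mark "(in use)" in the lbaf tables; lbads:12 means 2^12 = 4096-byte blocks with no metadata (ms:0), so nsze=0x100000 blocks works out to 4 GiB per namespace. A quick arithmetic check in the same shell idiom:

    # 0x100000 blocks x 2^12 bytes/block = 2^32 bytes
    echo "$(( (0x100000 << 12) >> 30 )) GiB"   # prints: 4 GiB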
lbads:9 rp:0 ]] 00:13:59.414 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:59.414 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:59.414 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.414 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.414 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:59.414 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:59.414 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:59.414 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.414 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.414 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:59.414 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:59.414 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:59.414 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.414 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.414 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:59.414 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:59.414 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:59.414 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.414 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.414 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:59.414 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:59.414 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:59.414 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.414 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
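The flbas/lbaf values being captured in this dump encode the namespace's on-disk format: the low nibble of FLBAS indexes the LBA format list, and each lbaf entry's lbads field is a power-of-two exponent for the data block size. A minimal decode, using only values already visible in this dump (flbas=0x4 and the lbaf4 entry flagged "(in use)") rather than a live query:

  flbas=0x4                        # as parsed into nvme2n3[flbas] above
  fmt=$(( flbas & 0xf ))           # low nibble selects the active format -> 4
  lbads=12                         # from the lbaf4 entry: "ms:0 lbads:12 rp:0 (in use)"
  echo "lbaf${fmt} in use, $(( 1 << lbads ))-byte blocks"   # -> 4096

So this namespace runs 4096-byte data blocks with no per-block metadata (ms:0).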
00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:13:59.415 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:13:59.416 
07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:59.416 07:50:21 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:13:59.416 07:50:21 nvme_fdp -- scripts/common.sh@18 -- # local i 00:13:59.416 07:50:21 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:13:59.416 07:50:21 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:59.416 07:50:21 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:13:59.416 07:50:21 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.416 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.417 07:50:21 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.417 
07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.417 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.418 07:50:21 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.418 
07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.418 07:50:21 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.418 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
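Everything from nsze down to the power-state fields here is functions.sh's nvme_get helper at work: it runs nvme id-ctrl (or id-ns), splits each "reg : val" line on IFS=:, and evals the pair into a global associative array such as nvme3[]. A condensed standalone sketch of the same parse, assuming stock nvme-cli text output rather than quoting the repo's helper verbatim:

  declare -A ctrl
  while IFS=: read -r reg val; do
      [[ -n $reg && -n $val ]] || continue   # skip lines without a reg:val pair
      reg=${reg//[[:space:]]/}               # drop the column padding on the key
      ctrl[$reg]=${val# }                    # drop the single leading space
  done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3)
  echo "vid=${ctrl[vid]} subnqn=${ctrl[subnqn]}"

One subtlety the helper relies on: values may themselves contain colons (the lbaf and ps0 entries above), so only the first colon may act as the separator, which is exactly what "read -r reg val" with IFS=: gives.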
00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.419 07:50:21 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:13:59.419 07:50:21 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:13:59.419 07:50:21 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:59.420 07:50:21 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:13:59.420 07:50:21 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:13:59.420 07:50:21 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:13:59.420 07:50:21 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:13:59.420 07:50:21 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:13:59.420 07:50:21 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:13:59.420 07:50:21 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:13:59.420 07:50:21 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:13:59.420 07:50:21 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
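With all four controllers enumerated, get_ctrls_with_feature fdp filters ctrls[] through ctrl_has_fdp, which reads each controller's CTRATT word and tests bit 19, the Flexible Data Placement capability bit defined by TP4146. That is why the controllers reporting ctratt=0x8000 fall through in the checks around this point while nvme3's 0x88010 passes. A standalone sketch of the same test (hypothetical helper name; plain nvme-cli instead of the repo's cached arrays):

  ctrl_supports_fdp() {
      local dev=$1 ctratt
      ctratt=$(/usr/local/src/nvme-cli/nvme id-ctrl "$dev" |
               awk -F: '/^ctratt/ { gsub(/[[:space:]]/, "", $2); print $2 }')
      (( ctratt & 1 << 19 ))                 # bit 19 set -> FDP supported
  }
  ctrl_supports_fdp /dev/nvme3 && echo "/dev/nvme3 is FDP-capable"

0x88010 & 0x80000 is non-zero, so only nvme3 survives the filter and becomes the test target.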
00:13:59.420 07:50:21 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:13:59.420 07:50:21 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:13:59.420 07:50:21 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:13:59.420 07:50:21 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:59.420 07:50:21 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:13:59.420 07:50:21 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:13:59.420 07:50:21 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:13:59.420 07:50:21 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:13:59.420 07:50:21 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:13:59.420 07:50:21 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:13:59.420 07:50:21 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:13:59.420 07:50:21 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:13:59.420 07:50:21 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:13:59.420 07:50:21 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:13:59.420 07:50:21 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:13:59.420 07:50:21 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:13:59.420 07:50:21 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:13:59.420 07:50:21 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:59.420 07:50:21 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:13:59.420 07:50:21 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:13:59.420 07:50:21 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:13:59.420 07:50:21 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:13:59.420 07:50:21 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:13:59.420 07:50:21 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:13:59.420 07:50:21 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:13:59.420 07:50:21 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:13:59.420 07:50:21 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:13:59.420 07:50:21 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:13:59.420 07:50:21 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:13:59.420 07:50:21 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:13:59.420 07:50:21 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:13:59.420 07:50:21 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:13:59.420 07:50:21 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:13:59.420 07:50:21 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:13:59.420 07:50:21 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:13:59.420 07:50:21 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:59.987 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:00.555 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:00.555 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:00.555 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:00.555 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:00.555 07:50:23 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:14:00.555 07:50:23 nvme_fdp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:00.555 07:50:23 
nvme_fdp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:00.555 07:50:23 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:14:00.555 ************************************ 00:14:00.555 START TEST nvme_flexible_data_placement 00:14:00.555 ************************************ 00:14:00.555 07:50:23 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:14:00.814 Initializing NVMe Controllers 00:14:00.814 Attaching to 0000:00:13.0 00:14:00.814 Controller supports FDP Attached to 0000:00:13.0 00:14:00.814 Namespace ID: 1 Endurance Group ID: 1 00:14:00.814 Initialization complete. 00:14:00.814 00:14:00.814 ================================== 00:14:00.814 == FDP tests for Namespace: #01 == 00:14:00.814 ================================== 00:14:00.814 00:14:00.814 Get Feature: FDP: 00:14:00.814 ================= 00:14:00.814 Enabled: Yes 00:14:00.814 FDP configuration Index: 0 00:14:00.814 00:14:00.814 FDP configurations log page 00:14:00.814 =========================== 00:14:00.814 Number of FDP configurations: 1 00:14:00.814 Version: 0 00:14:00.814 Size: 112 00:14:00.814 FDP Configuration Descriptor: 0 00:14:00.814 Descriptor Size: 96 00:14:00.814 Reclaim Group Identifier format: 2 00:14:00.814 FDP Volatile Write Cache: Not Present 00:14:00.814 FDP Configuration: Valid 00:14:00.814 Vendor Specific Size: 0 00:14:00.814 Number of Reclaim Groups: 2 00:14:00.814 Number of Reclaim Unit Handles: 8 00:14:00.814 Max Placement Identifiers: 128 00:14:00.814 Number of Namespaces Supported: 256 00:14:00.814 Reclaim Unit Nominal Size: 6000000 bytes 00:14:00.814 Estimated Reclaim Unit Time Limit: Not Reported 00:14:00.814 RUH Desc #000: RUH Type: Initially Isolated 00:14:00.814 RUH Desc #001: RUH Type: Initially Isolated 00:14:00.814 RUH Desc #002: RUH Type: Initially Isolated 00:14:00.814 RUH Desc #003: RUH Type: Initially Isolated 00:14:00.814 RUH Desc #004: RUH Type: Initially Isolated 00:14:00.814 RUH Desc #005: RUH Type: Initially Isolated 00:14:00.814 RUH Desc #006: RUH Type: Initially Isolated 00:14:00.814 RUH Desc #007: RUH Type: Initially Isolated 00:14:00.814 00:14:00.814 FDP reclaim unit handle usage log page 00:14:00.814 ====================================== 00:14:00.814 Number of Reclaim Unit Handles: 8 00:14:00.814 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:14:00.814 RUH Usage Desc #001: RUH Attributes: Unused 00:14:00.814 RUH Usage Desc #002: RUH Attributes: Unused 00:14:00.814 RUH Usage Desc #003: RUH Attributes: Unused 00:14:00.814 RUH Usage Desc #004: RUH Attributes: Unused 00:14:00.814 RUH Usage Desc #005: RUH Attributes: Unused 00:14:00.814 RUH Usage Desc #006: RUH Attributes: Unused 00:14:00.814 RUH Usage Desc #007: RUH Attributes: Unused 00:14:00.814 00:14:00.814 FDP statistics log page 00:14:00.814 ======================= 00:14:00.814 Host bytes with metadata written: 789524480 00:14:00.814 Media bytes with metadata written: 789618688 00:14:00.814 Media bytes erased: 0 00:14:00.814 00:14:00.814 FDP Reclaim unit handle status 00:14:00.814 ============================== 00:14:00.814 Number of RUHS descriptors: 2 00:14:00.814 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000000f0d 00:14:00.814 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:14:00.814 00:14:00.814 FDP write on placement id: 0 success 00:14:00.814 00:14:00.814 Set Feature: Enabling FDP events on Placement handle: #0 
Success 00:14:00.814 00:14:00.814 IO mgmt send: RUH update for Placement ID: #0 Success 00:14:00.814 00:14:00.814 Get Feature: FDP Events for Placement handle: #0 00:14:00.814 ======================== 00:14:00.814 Number of FDP Events: 6 00:14:00.814 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:14:00.814 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:14:00.814 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:14:00.814 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:14:00.814 FDP Event: #4 Type: Media Reallocated Enabled: No 00:14:00.814 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:14:00.814 00:14:00.814 FDP events log page 00:14:00.814 =================== 00:14:00.814 Number of FDP events: 1 00:14:00.814 FDP Event #0: 00:14:00.814 Event Type: RU Not Written to Capacity 00:14:00.814 Placement Identifier: Valid 00:14:00.814 NSID: Valid 00:14:00.814 Location: Valid 00:14:00.814 Placement Identifier: 0 00:14:00.814 Event Timestamp: 8 00:14:00.814 Namespace Identifier: 1 00:14:00.814 Reclaim Group Identifier: 0 00:14:00.814 Reclaim Unit Handle Identifier: 0 00:14:00.814 00:14:00.814 FDP test passed 00:14:00.814 00:14:00.814 real 0m0.313s 00:14:00.814 user 0m0.111s 00:14:00.814 sys 0m0.099s 00:14:00.814 07:50:23 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:00.814 07:50:23 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:14:00.814 ************************************ 00:14:00.814 END TEST nvme_flexible_data_placement 00:14:00.814 ************************************ 00:14:01.072 ************************************ 00:14:01.072 END TEST nvme_fdp 00:14:01.072 ************************************ 00:14:01.072 00:14:01.072 real 0m8.236s 00:14:01.072 user 0m1.447s 00:14:01.072 sys 0m1.779s 00:14:01.072 07:50:23 nvme_fdp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:01.072 07:50:23 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:14:01.072 07:50:23 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:14:01.072 07:50:23 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:14:01.073 07:50:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:01.073 07:50:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:01.073 07:50:23 -- common/autotest_common.sh@10 -- # set +x 00:14:01.073 ************************************ 00:14:01.073 START TEST nvme_rpc 00:14:01.073 ************************************ 00:14:01.073 07:50:23 nvme_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:14:01.073 * Looking for test storage... 
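The FDP run above was selected by ctrl_has_fdp, which tests bit 19 (FDP support) of each controller's CTRATT value: nvme3's 0x88010 has the bit set, while the plain 0x8000 controllers do not. A minimal standalone sketch of the same check, assuming nvme-cli is installed and using a hypothetical /dev/nvme3:

    # Pull CTRATT from Identify Controller output and test bit 19 (FDP support).
    ctratt=$(nvme id-ctrl /dev/nvme3 | awk '/^ctratt/ {print $3}')
    if (( ctratt & 1 << 19 )); then
        echo "FDP supported (ctratt=$ctratt)"
    else
        echo "FDP not supported (ctratt=$ctratt)"
    fi

This mirrors the (( ctratt & 1 << 19 )) test that nvme/functions.sh performs on its cached identify data in the trace above.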
00:14:01.073 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:14:01.073 07:50:23 nvme_rpc -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:14:01.073 07:50:23 nvme_rpc -- common/autotest_common.sh@1689 -- # lcov --version 00:14:01.073 07:50:23 nvme_rpc -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:14:01.331 07:50:23 nvme_rpc -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:14:01.331 07:50:23 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:01.331 07:50:23 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:01.331 07:50:23 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:01.331 07:50:23 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:14:01.331 07:50:23 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:14:01.331 07:50:23 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:14:01.331 07:50:23 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:14:01.331 07:50:23 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:14:01.331 07:50:23 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:14:01.331 07:50:23 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:14:01.331 07:50:23 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:01.331 07:50:23 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:14:01.331 07:50:23 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:14:01.331 07:50:23 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:01.331 07:50:23 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:01.331 07:50:23 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:14:01.331 07:50:23 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:14:01.331 07:50:23 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:01.331 07:50:23 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:14:01.331 07:50:23 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:14:01.331 07:50:23 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:14:01.331 07:50:23 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:14:01.331 07:50:23 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:01.331 07:50:23 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:14:01.331 07:50:23 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:14:01.331 07:50:23 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:01.331 07:50:23 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:01.331 07:50:23 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:14:01.331 07:50:23 nvme_rpc -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:01.331 07:50:23 nvme_rpc -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:14:01.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.331 --rc genhtml_branch_coverage=1 00:14:01.331 --rc genhtml_function_coverage=1 00:14:01.331 --rc genhtml_legend=1 00:14:01.331 --rc geninfo_all_blocks=1 00:14:01.331 --rc geninfo_unexecuted_blocks=1 00:14:01.331 00:14:01.331 ' 00:14:01.331 07:50:23 nvme_rpc -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:14:01.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.331 --rc genhtml_branch_coverage=1 00:14:01.331 --rc genhtml_function_coverage=1 00:14:01.331 --rc genhtml_legend=1 00:14:01.331 --rc geninfo_all_blocks=1 00:14:01.331 --rc geninfo_unexecuted_blocks=1 00:14:01.331 00:14:01.331 ' 00:14:01.331 07:50:23 nvme_rpc -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 
00:14:01.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.331 --rc genhtml_branch_coverage=1 00:14:01.331 --rc genhtml_function_coverage=1 00:14:01.331 --rc genhtml_legend=1 00:14:01.331 --rc geninfo_all_blocks=1 00:14:01.332 --rc geninfo_unexecuted_blocks=1 00:14:01.332 00:14:01.332 ' 00:14:01.332 07:50:23 nvme_rpc -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:14:01.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.332 --rc genhtml_branch_coverage=1 00:14:01.332 --rc genhtml_function_coverage=1 00:14:01.332 --rc genhtml_legend=1 00:14:01.332 --rc geninfo_all_blocks=1 00:14:01.332 --rc geninfo_unexecuted_blocks=1 00:14:01.332 00:14:01.332 ' 00:14:01.332 07:50:23 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:01.332 07:50:23 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:14:01.332 07:50:23 nvme_rpc -- common/autotest_common.sh@1505 -- # bdfs=() 00:14:01.332 07:50:23 nvme_rpc -- common/autotest_common.sh@1505 -- # local bdfs 00:14:01.332 07:50:23 nvme_rpc -- common/autotest_common.sh@1506 -- # bdfs=($(get_nvme_bdfs)) 00:14:01.332 07:50:23 nvme_rpc -- common/autotest_common.sh@1506 -- # get_nvme_bdfs 00:14:01.332 07:50:23 nvme_rpc -- common/autotest_common.sh@1494 -- # bdfs=() 00:14:01.332 07:50:23 nvme_rpc -- common/autotest_common.sh@1494 -- # local bdfs 00:14:01.332 07:50:23 nvme_rpc -- common/autotest_common.sh@1495 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:01.332 07:50:23 nvme_rpc -- common/autotest_common.sh@1495 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:01.332 07:50:23 nvme_rpc -- common/autotest_common.sh@1495 -- # jq -r '.config[].params.traddr' 00:14:01.332 07:50:23 nvme_rpc -- common/autotest_common.sh@1496 -- # (( 4 == 0 )) 00:14:01.332 07:50:23 nvme_rpc -- common/autotest_common.sh@1500 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:14:01.332 07:50:23 nvme_rpc -- common/autotest_common.sh@1508 -- # echo 0000:00:10.0 00:14:01.332 07:50:23 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:14:01.332 07:50:23 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67386 00:14:01.332 07:50:23 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:14:01.332 07:50:23 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:14:01.332 07:50:23 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67386 00:14:01.332 07:50:23 nvme_rpc -- common/autotest_common.sh@831 -- # '[' -z 67386 ']' 00:14:01.332 07:50:23 nvme_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.332 07:50:23 nvme_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:01.332 07:50:23 nvme_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:01.332 07:50:23 nvme_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:01.332 07:50:23 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:01.332 [2024-11-06 07:50:23.952627] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
00:14:01.332 [2024-11-06 07:50:23.952831] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67386 ] 00:14:01.590 [2024-11-06 07:50:24.158129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:01.849 [2024-11-06 07:50:24.308271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.849 [2024-11-06 07:50:24.308296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:02.816 07:50:25 nvme_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:02.816 07:50:25 nvme_rpc -- common/autotest_common.sh@864 -- # return 0 00:14:02.816 07:50:25 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:14:03.074 Nvme0n1 00:14:03.074 07:50:25 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:14:03.074 07:50:25 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:14:03.331 request: 00:14:03.331 { 00:14:03.331 "bdev_name": "Nvme0n1", 00:14:03.331 "filename": "non_existing_file", 00:14:03.331 "method": "bdev_nvme_apply_firmware", 00:14:03.331 "req_id": 1 00:14:03.331 } 00:14:03.331 Got JSON-RPC error response 00:14:03.331 response: 00:14:03.331 { 00:14:03.331 "code": -32603, 00:14:03.331 "message": "open file failed." 00:14:03.331 } 00:14:03.331 07:50:25 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:14:03.331 07:50:25 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:14:03.332 07:50:25 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:14:03.590 07:50:26 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:03.590 07:50:26 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67386 00:14:03.590 07:50:26 nvme_rpc -- common/autotest_common.sh@950 -- # '[' -z 67386 ']' 00:14:03.590 07:50:26 nvme_rpc -- common/autotest_common.sh@954 -- # kill -0 67386 00:14:03.590 07:50:26 nvme_rpc -- common/autotest_common.sh@955 -- # uname 00:14:03.590 07:50:26 nvme_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:03.590 07:50:26 nvme_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67386 00:14:03.590 07:50:26 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:03.590 07:50:26 nvme_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:03.590 killing process with pid 67386 00:14:03.590 07:50:26 nvme_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67386' 00:14:03.590 07:50:26 nvme_rpc -- common/autotest_common.sh@969 -- # kill 67386 00:14:03.590 07:50:26 nvme_rpc -- common/autotest_common.sh@974 -- # wait 67386 00:14:06.120 00:14:06.120 real 0m4.740s 00:14:06.120 user 0m9.036s 00:14:06.120 sys 0m0.803s 00:14:06.120 07:50:28 nvme_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:06.120 ************************************ 00:14:06.120 END TEST nvme_rpc 00:14:06.120 ************************************ 00:14:06.120 07:50:28 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.120 07:50:28 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:14:06.120 07:50:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 
1 ']' 00:14:06.120 07:50:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:06.120 07:50:28 -- common/autotest_common.sh@10 -- # set +x 00:14:06.120 ************************************ 00:14:06.120 START TEST nvme_rpc_timeouts 00:14:06.120 ************************************ 00:14:06.120 07:50:28 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:14:06.120 * Looking for test storage... 00:14:06.120 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:14:06.120 07:50:28 nvme_rpc_timeouts -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:14:06.120 07:50:28 nvme_rpc_timeouts -- common/autotest_common.sh@1689 -- # lcov --version 00:14:06.120 07:50:28 nvme_rpc_timeouts -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:14:06.120 07:50:28 nvme_rpc_timeouts -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:14:06.120 07:50:28 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:06.120 07:50:28 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:06.120 07:50:28 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:06.120 07:50:28 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:14:06.120 07:50:28 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:14:06.120 07:50:28 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:14:06.120 07:50:28 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:14:06.120 07:50:28 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:14:06.121 07:50:28 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:14:06.121 07:50:28 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:14:06.121 07:50:28 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:06.121 07:50:28 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:14:06.121 07:50:28 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:14:06.121 07:50:28 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:06.121 07:50:28 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:06.121 07:50:28 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:14:06.121 07:50:28 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:14:06.121 07:50:28 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:06.121 07:50:28 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:14:06.121 07:50:28 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:14:06.121 07:50:28 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:14:06.121 07:50:28 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:14:06.121 07:50:28 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:06.121 07:50:28 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:14:06.121 07:50:28 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:14:06.121 07:50:28 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:06.121 07:50:28 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:06.121 07:50:28 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:14:06.121 07:50:28 nvme_rpc_timeouts -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:06.121 07:50:28 nvme_rpc_timeouts -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:14:06.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.121 --rc genhtml_branch_coverage=1 00:14:06.121 --rc genhtml_function_coverage=1 00:14:06.121 --rc genhtml_legend=1 00:14:06.121 --rc geninfo_all_blocks=1 00:14:06.121 --rc geninfo_unexecuted_blocks=1 00:14:06.121 00:14:06.121 ' 00:14:06.121 07:50:28 nvme_rpc_timeouts -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:14:06.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.121 --rc genhtml_branch_coverage=1 00:14:06.121 --rc genhtml_function_coverage=1 00:14:06.121 --rc genhtml_legend=1 00:14:06.121 --rc geninfo_all_blocks=1 00:14:06.121 --rc geninfo_unexecuted_blocks=1 00:14:06.121 00:14:06.121 ' 00:14:06.121 07:50:28 nvme_rpc_timeouts -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:14:06.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.121 --rc genhtml_branch_coverage=1 00:14:06.121 --rc genhtml_function_coverage=1 00:14:06.121 --rc genhtml_legend=1 00:14:06.121 --rc geninfo_all_blocks=1 00:14:06.121 --rc geninfo_unexecuted_blocks=1 00:14:06.121 00:14:06.121 ' 00:14:06.121 07:50:28 nvme_rpc_timeouts -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:14:06.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.121 --rc genhtml_branch_coverage=1 00:14:06.121 --rc genhtml_function_coverage=1 00:14:06.121 --rc genhtml_legend=1 00:14:06.121 --rc geninfo_all_blocks=1 00:14:06.121 --rc geninfo_unexecuted_blocks=1 00:14:06.121 00:14:06.121 ' 00:14:06.121 07:50:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:06.121 07:50:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67468 00:14:06.121 07:50:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67468 00:14:06.121 07:50:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67504 00:14:06.121 07:50:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:14:06.121 07:50:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:14:06.121 07:50:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67504 00:14:06.121 07:50:28 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # '[' -z 67504 ']' 00:14:06.121 07:50:28 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:06.121 07:50:28 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:06.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:06.121 07:50:28 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:06.121 07:50:28 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:06.121 07:50:28 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:14:06.121 [2024-11-06 07:50:28.670654] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:14:06.121 [2024-11-06 07:50:28.670842] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67504 ] 00:14:06.379 [2024-11-06 07:50:28.861952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:06.637 [2024-11-06 07:50:29.021382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.637 [2024-11-06 07:50:29.021393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:07.579 07:50:29 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:07.579 07:50:29 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # return 0 00:14:07.579 Checking default timeout settings: 00:14:07.579 07:50:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:14:07.579 07:50:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:14:07.837 Making settings changes with rpc: 00:14:07.837 07:50:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:14:07.837 07:50:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:14:08.096 Check default vs. modified settings: 00:14:08.096 07:50:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:14:08.096 07:50:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:14:08.662 07:50:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:14:08.662 07:50:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:14:08.662 07:50:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67468 00:14:08.662 07:50:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:14:08.662 07:50:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:14:08.662 07:50:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:14:08.662 07:50:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67468 00:14:08.662 07:50:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:14:08.662 07:50:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:14:08.662 07:50:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:14:08.662 07:50:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:14:08.662 Setting action_on_timeout is changed as expected. 00:14:08.662 07:50:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:14:08.662 07:50:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:14:08.662 07:50:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67468 00:14:08.662 07:50:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:14:08.662 07:50:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:14:08.662 07:50:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:14:08.662 07:50:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67468 00:14:08.662 07:50:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:14:08.662 07:50:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:14:08.662 07:50:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:14:08.662 Setting timeout_us is changed as expected. 00:14:08.662 07:50:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:14:08.662 07:50:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
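Each "changed as expected" check above compares one field of the JSON config dumped by save_config before and after bdev_nvme_set_options. A condensed sketch of the same comparison, using jq in place of the test's grep/awk/sed chain (file names are illustrative, rpc.py is scripts/rpc.py, and a target must already be listening):

    get_opt() {   # one bdev_nvme_set_options parameter from a saved config
        jq -r --arg k "$1" \
            '.subsystems[].config[]? | select(.method == "bdev_nvme_set_options") | .params[$k]' "$2"
    }
    rpc.py save_config > /tmp/settings_default.json
    rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
    rpc.py save_config > /tmp/settings_modified.json
    for key in action_on_timeout timeout_us timeout_admin_us; do
        [[ $(get_opt "$key" /tmp/settings_default.json) != "$(get_opt "$key" /tmp/settings_modified.json)" ]] \
            && echo "Setting $key is changed as expected."
    done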
00:14:08.662 07:50:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:14:08.662 07:50:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67468 00:14:08.662 07:50:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:14:08.662 07:50:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:14:08.662 07:50:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:14:08.662 07:50:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:14:08.662 07:50:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67468 00:14:08.662 07:50:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:14:08.662 07:50:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:14:08.662 07:50:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:14:08.662 Setting timeout_admin_us is changed as expected. 00:14:08.662 07:50:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:14:08.662 07:50:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:14:08.662 07:50:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67468 /tmp/settings_modified_67468 00:14:08.662 07:50:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67504 00:14:08.662 07:50:31 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # '[' -z 67504 ']' 00:14:08.662 07:50:31 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # kill -0 67504 00:14:08.662 07:50:31 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # uname 00:14:08.662 07:50:31 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:08.662 07:50:31 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67504 00:14:08.662 07:50:31 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:08.662 killing process with pid 67504 00:14:08.663 07:50:31 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:08.663 07:50:31 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67504' 00:14:08.663 07:50:31 nvme_rpc_timeouts -- common/autotest_common.sh@969 -- # kill 67504 00:14:08.663 07:50:31 nvme_rpc_timeouts -- common/autotest_common.sh@974 -- # wait 67504 00:14:11.193 RPC TIMEOUT SETTING TEST PASSED. 00:14:11.193 07:50:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
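killprocess, traced above during teardown, verifies the PID is still alive and looks up its command name before signalling, so a recycled PID is never killed blindly; the real helper in autotest_common.sh also special-cases processes launched via sudo. A simplified sketch of that pattern:

    killprocess() {
        local pid=$1 name
        kill -0 "$pid" || return 1                 # is it still running?
        name=$(ps --no-headers -o comm= "$pid")    # what is it, really?
        echo "killing process with pid $pid ($name)"
        kill "$pid"
        wait "$pid"    # reaping only works for this shell's own children
    }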
00:14:11.193 00:14:11.193 real 0m4.984s 00:14:11.193 user 0m9.624s 00:14:11.193 sys 0m0.816s 00:14:11.193 07:50:33 nvme_rpc_timeouts -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:11.193 07:50:33 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:14:11.193 ************************************ 00:14:11.193 END TEST nvme_rpc_timeouts 00:14:11.193 ************************************ 00:14:11.193 07:50:33 -- spdk/autotest.sh@239 -- # uname -s 00:14:11.193 07:50:33 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:14:11.193 07:50:33 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:14:11.193 07:50:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:11.193 07:50:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:11.193 07:50:33 -- common/autotest_common.sh@10 -- # set +x 00:14:11.193 ************************************ 00:14:11.193 START TEST sw_hotplug 00:14:11.193 ************************************ 00:14:11.193 07:50:33 sw_hotplug -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:14:11.193 * Looking for test storage... 00:14:11.193 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:14:11.193 07:50:33 sw_hotplug -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:14:11.193 07:50:33 sw_hotplug -- common/autotest_common.sh@1689 -- # lcov --version 00:14:11.193 07:50:33 sw_hotplug -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:14:11.193 07:50:33 sw_hotplug -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:14:11.193 07:50:33 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:11.194 07:50:33 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:11.194 07:50:33 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:11.194 07:50:33 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:14:11.194 07:50:33 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:14:11.194 07:50:33 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:14:11.194 07:50:33 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:14:11.194 07:50:33 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:14:11.194 07:50:33 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:14:11.194 07:50:33 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:14:11.194 07:50:33 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:11.194 07:50:33 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:14:11.194 07:50:33 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:14:11.194 07:50:33 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:11.194 07:50:33 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:11.194 07:50:33 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:14:11.194 07:50:33 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:14:11.194 07:50:33 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:11.194 07:50:33 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:14:11.194 07:50:33 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:14:11.194 07:50:33 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:14:11.194 07:50:33 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:14:11.194 07:50:33 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:11.194 07:50:33 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:14:11.194 07:50:33 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:14:11.194 07:50:33 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:11.194 07:50:33 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:11.194 07:50:33 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:14:11.194 07:50:33 sw_hotplug -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:11.194 07:50:33 sw_hotplug -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:14:11.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.194 --rc genhtml_branch_coverage=1 00:14:11.194 --rc genhtml_function_coverage=1 00:14:11.194 --rc genhtml_legend=1 00:14:11.194 --rc geninfo_all_blocks=1 00:14:11.194 --rc geninfo_unexecuted_blocks=1 00:14:11.194 00:14:11.194 ' 00:14:11.194 07:50:33 sw_hotplug -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:14:11.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.194 --rc genhtml_branch_coverage=1 00:14:11.194 --rc genhtml_function_coverage=1 00:14:11.194 --rc genhtml_legend=1 00:14:11.194 --rc geninfo_all_blocks=1 00:14:11.194 --rc geninfo_unexecuted_blocks=1 00:14:11.194 00:14:11.194 ' 00:14:11.194 07:50:33 sw_hotplug -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:14:11.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.194 --rc genhtml_branch_coverage=1 00:14:11.194 --rc genhtml_function_coverage=1 00:14:11.194 --rc genhtml_legend=1 00:14:11.194 --rc geninfo_all_blocks=1 00:14:11.194 --rc geninfo_unexecuted_blocks=1 00:14:11.194 00:14:11.194 ' 00:14:11.194 07:50:33 sw_hotplug -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:14:11.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.194 --rc genhtml_branch_coverage=1 00:14:11.194 --rc genhtml_function_coverage=1 00:14:11.194 --rc genhtml_legend=1 00:14:11.194 --rc geninfo_all_blocks=1 00:14:11.194 --rc geninfo_unexecuted_blocks=1 00:14:11.194 00:14:11.194 ' 00:14:11.194 07:50:33 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:11.452 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:11.710 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:11.710 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:11.710 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:11.710 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:11.710 07:50:34 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:14:11.710 07:50:34 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:14:11.710 07:50:34 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
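nvme_in_userspace, expanded in the trace that follows, builds the controller list by matching PCI class 01 (mass storage), subclass 08 (non-volatile memory), programming interface 02 (NVM Express). The core of that scan reduces to one lspci pipeline, shown here as taken from the scripts/common.sh trace below:

    # Print the domain:bus:device.function of every NVMe controller
    # (class/subclass "0108", prog-if 02).
    lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'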
00:14:11.710 07:50:34 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@233 -- # local class 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@18 -- # local i 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@18 -- # local i 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@18 -- # local i 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:14:11.710 07:50:34 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@18 -- # local i 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:14:11.710 07:50:34 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:14:11.710 07:50:34 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:14:11.710 07:50:34 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:14:11.710 07:50:34 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:11.968 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:12.227 Waiting for block devices as requested 00:14:12.227 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:12.486 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:14:12.486 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:14:12.486 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:14:17.754 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:14:17.754 07:50:40 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:14:17.754 07:50:40 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:18.011 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:14:18.012 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:18.012 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:14:18.578 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:14:18.836 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:18.836 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:18.836 07:50:41 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:14:18.836 07:50:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:18.836 07:50:41 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:14:18.836 07:50:41 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:14:18.836 07:50:41 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68390 00:14:18.836 07:50:41 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:14:18.836 07:50:41 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:14:18.836 07:50:41 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:14:18.836 07:50:41 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:14:18.836 07:50:41 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:14:18.836 07:50:41 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:14:18.836 07:50:41 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:14:18.836 07:50:41 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:14:18.836 07:50:41 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:14:18.836 07:50:41 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:14:18.836 07:50:41 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:14:18.836 07:50:41 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:14:18.836 07:50:41 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:14:18.836 07:50:41 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:14:19.094 Initializing NVMe Controllers 00:14:19.094 Attaching to 0000:00:10.0 00:14:19.094 Attaching to 0000:00:11.0 00:14:19.094 Attached to 0000:00:10.0 00:14:19.094 Attached to 0000:00:11.0 00:14:19.094 Initialization complete. Starting I/O... 
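debug_remove_attach_helper above wraps each hotplug cycle in timing_cmd, which leans on bash's TIMEFORMAT so that the `time` keyword emits only the elapsed seconds it wants to record. The trick in isolation:

    # %2R prints just the real (wall-clock) time with two decimal places,
    # instead of the default three-line real/user/sys report.
    TIMEFORMAT=%2R
    time sleep 0.3    # stderr: 0.30

timing_cmd captures that single stderr value so the helper's runtime can be reported after it returns.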
00:14:19.094 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:14:19.094 QEMU NVMe Ctrl (12341 ): 2 I/Os completed (+2) 00:14:19.094 00:14:20.467 QEMU NVMe Ctrl (12340 ): 1094 I/Os completed (+1094) 00:14:20.467 QEMU NVMe Ctrl (12341 ): 1113 I/Os completed (+1111) 00:14:20.467 00:14:21.400 QEMU NVMe Ctrl (12340 ): 2370 I/Os completed (+1276) 00:14:21.401 QEMU NVMe Ctrl (12341 ): 2526 I/Os completed (+1413) 00:14:21.401 00:14:22.336 QEMU NVMe Ctrl (12340 ): 3750 I/Os completed (+1380) 00:14:22.336 QEMU NVMe Ctrl (12341 ): 3949 I/Os completed (+1423) 00:14:22.336 00:14:23.272 QEMU NVMe Ctrl (12340 ): 5334 I/Os completed (+1584) 00:14:23.272 QEMU NVMe Ctrl (12341 ): 5590 I/Os completed (+1641) 00:14:23.272 00:14:24.207 QEMU NVMe Ctrl (12340 ): 6954 I/Os completed (+1620) 00:14:24.207 QEMU NVMe Ctrl (12341 ): 7222 I/Os completed (+1632) 00:14:24.207 00:14:25.142 07:50:47 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:25.142 07:50:47 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:25.142 07:50:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:25.142 [2024-11-06 07:50:47.421069] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:25.142 Controller removed: QEMU NVMe Ctrl (12340 ) 00:14:25.142 [2024-11-06 07:50:47.424197] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:25.142 [2024-11-06 07:50:47.424358] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:25.142 [2024-11-06 07:50:47.424430] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:25.142 [2024-11-06 07:50:47.424506] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:25.142 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:25.142 [2024-11-06 07:50:47.428106] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:25.142 [2024-11-06 07:50:47.428205] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:25.142 [2024-11-06 07:50:47.428272] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:25.142 [2024-11-06 07:50:47.428318] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:25.142 07:50:47 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:25.142 07:50:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:25.142 [2024-11-06 07:50:47.456556] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
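The bare `echo 1` and `echo uio_pci_generic` traces around these hotplug events are writes to standard Linux PCI sysfs attributes; xtrace simply hides the redirections. A minimal sketch of one remove/re-attach cycle for a single hypothetical BDF, under the assumption that this is roughly what the harness drives (root required; not necessarily the exact sequence in sw_hotplug.sh):

    bdf=0000:00:10.0                                      # hypothetical device
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"           # hot-remove the function
    sleep 1
    echo 1 > /sys/bus/pci/rescan                          # rediscover it on the bus
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe              # bind the userspace driver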
00:14:25.142 Controller removed: QEMU NVMe Ctrl (12341 ) 00:14:25.142 [2024-11-06 07:50:47.458991] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:25.142 [2024-11-06 07:50:47.459233] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:25.142 [2024-11-06 07:50:47.459323] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:25.142 [2024-11-06 07:50:47.459372] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:25.142 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:25.142 07:50:47 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:14:25.142 07:50:47 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:25.142 [2024-11-06 07:50:47.462906] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:25.143 [2024-11-06 07:50:47.462973] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:25.143 [2024-11-06 07:50:47.463024] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:25.143 [2024-11-06 07:50:47.463063] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:25.143 07:50:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:25.143 07:50:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:25.143 07:50:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:25.143 07:50:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:25.143 07:50:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:25.143 07:50:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:25.143 07:50:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:25.143 07:50:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:25.143 Attaching to 0000:00:10.0 00:14:25.143 Attached to 0000:00:10.0 00:14:25.143 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:14:25.143 00:14:25.143 07:50:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:25.143 07:50:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:25.143 07:50:47 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:25.401 Attaching to 0000:00:11.0 00:14:25.401 Attached to 0000:00:11.0 00:14:26.334 QEMU NVMe Ctrl (12340 ): 1560 I/Os completed (+1560) 00:14:26.334 QEMU NVMe Ctrl (12341 ): 1446 I/Os completed (+1446) 00:14:26.334 00:14:27.268 QEMU NVMe Ctrl (12340 ): 2984 I/Os completed (+1424) 00:14:27.268 QEMU NVMe Ctrl (12341 ): 2889 I/Os completed (+1443) 00:14:27.268 00:14:28.220 QEMU NVMe Ctrl (12340 ): 4472 I/Os completed (+1488) 00:14:28.220 QEMU NVMe Ctrl (12341 ): 4414 I/Os completed (+1525) 00:14:28.220 00:14:29.156 QEMU NVMe Ctrl (12340 ): 5992 I/Os completed (+1520) 00:14:29.156 QEMU NVMe Ctrl (12341 ): 5958 I/Os completed (+1544) 00:14:29.157 00:14:30.091 QEMU NVMe Ctrl (12340 ): 7464 I/Os completed (+1472) 00:14:30.091 QEMU NVMe Ctrl (12341 ): 7447 I/Os completed (+1489) 00:14:30.091 00:14:31.466 QEMU NVMe Ctrl (12340 ): 9071 I/Os completed (+1607) 00:14:31.466 QEMU NVMe Ctrl (12341 ): 9071 I/Os completed (+1624) 00:14:31.466 00:14:32.399 QEMU NVMe Ctrl (12340 ): 10611 I/Os completed (+1540) 00:14:32.399 QEMU NVMe Ctrl (12341 ): 10650 I/Os completed (+1579) 00:14:32.399 00:14:33.332 QEMU NVMe Ctrl (12340 ): 12001 I/Os completed (+1390) 00:14:33.332 QEMU NVMe 
Ctrl (12341 ): 12087 I/Os completed (+1437) 00:14:33.332 00:14:34.266 QEMU NVMe Ctrl (12340 ): 13421 I/Os completed (+1420) 00:14:34.266 QEMU NVMe Ctrl (12341 ): 13596 I/Os completed (+1509) 00:14:34.266 00:14:35.225 QEMU NVMe Ctrl (12340 ): 15018 I/Os completed (+1597) 00:14:35.225 QEMU NVMe Ctrl (12341 ): 15225 I/Os completed (+1629) 00:14:35.225 00:14:36.157 QEMU NVMe Ctrl (12340 ): 16678 I/Os completed (+1660) 00:14:36.157 QEMU NVMe Ctrl (12341 ): 16926 I/Os completed (+1701) 00:14:36.157 00:14:37.093 QEMU NVMe Ctrl (12340 ): 18346 I/Os completed (+1668) 00:14:37.093 QEMU NVMe Ctrl (12341 ): 18612 I/Os completed (+1686) 00:14:37.093 00:14:37.351 07:50:59 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:14:37.351 07:50:59 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:37.351 07:50:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:37.351 07:50:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:37.351 [2024-11-06 07:50:59.757817] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:37.351 Controller removed: QEMU NVMe Ctrl (12340 ) 00:14:37.351 [2024-11-06 07:50:59.760635] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:37.351 [2024-11-06 07:50:59.760861] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:37.351 [2024-11-06 07:50:59.760911] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:37.351 [2024-11-06 07:50:59.760945] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:37.351 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:37.351 [2024-11-06 07:50:59.764889] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:37.351 [2024-11-06 07:50:59.764969] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:37.351 [2024-11-06 07:50:59.765001] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:37.351 [2024-11-06 07:50:59.765030] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:37.352 07:50:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:37.352 07:50:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:37.352 [2024-11-06 07:50:59.780487] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:14:37.352 Controller removed: QEMU NVMe Ctrl (12341 ) 00:14:37.352 [2024-11-06 07:50:59.782984] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:37.352 [2024-11-06 07:50:59.783044] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:37.352 [2024-11-06 07:50:59.783086] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:37.352 [2024-11-06 07:50:59.783115] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:37.352 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:37.352 [2024-11-06 07:50:59.786811] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:37.352 [2024-11-06 07:50:59.787042] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:37.352 07:50:59 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:14:37.352 07:50:59 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:37.352 [2024-11-06 07:50:59.787237] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:37.352 [2024-11-06 07:50:59.787296] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:37.352 07:50:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:37.352 07:50:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:37.352 07:50:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:37.352 07:50:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:37.352 07:50:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:37.352 07:50:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:37.352 07:50:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:37.352 07:50:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:37.352 Attaching to 0000:00:10.0 00:14:37.352 Attached to 0000:00:10.0 00:14:37.609 07:51:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:37.609 07:51:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:37.609 07:51:00 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:37.609 Attaching to 0000:00:11.0 00:14:37.609 Attached to 0000:00:11.0 00:14:38.182 QEMU NVMe Ctrl (12340 ): 1048 I/Os completed (+1048) 00:14:38.182 QEMU NVMe Ctrl (12341 ): 959 I/Os completed (+959) 00:14:38.182 00:14:39.116 QEMU NVMe Ctrl (12340 ): 2439 I/Os completed (+1391) 00:14:39.116 QEMU NVMe Ctrl (12341 ): 2358 I/Os completed (+1399) 00:14:39.116 00:14:40.050 QEMU NVMe Ctrl (12340 ): 3815 I/Os completed (+1376) 00:14:40.050 QEMU NVMe Ctrl (12341 ): 3786 I/Os completed (+1428) 00:14:40.050 00:14:41.425 QEMU NVMe Ctrl (12340 ): 5307 I/Os completed (+1492) 00:14:41.425 QEMU NVMe Ctrl (12341 ): 5354 I/Os completed (+1568) 00:14:41.425 00:14:42.361 QEMU NVMe Ctrl (12340 ): 6787 I/Os completed (+1480) 00:14:42.361 QEMU NVMe Ctrl (12341 ): 6902 I/Os completed (+1548) 00:14:42.361 00:14:43.296 QEMU NVMe Ctrl (12340 ): 8259 I/Os completed (+1472) 00:14:43.296 QEMU NVMe Ctrl (12341 ): 8436 I/Os completed (+1534) 00:14:43.296 00:14:44.231 QEMU NVMe Ctrl (12340 ): 9625 I/Os completed (+1366) 00:14:44.231 QEMU NVMe Ctrl (12341 ): 9876 I/Os completed (+1440) 00:14:44.231 00:14:45.170 QEMU NVMe Ctrl (12340 ): 11141 I/Os completed (+1516) 00:14:45.170 QEMU NVMe Ctrl (12341 ): 11402 I/Os completed (+1526) 00:14:45.170 00:14:46.105 QEMU 
NVMe Ctrl (12340 ): 12689 I/Os completed (+1548) 00:14:46.105 QEMU NVMe Ctrl (12341 ): 12959 I/Os completed (+1557) 00:14:46.105 00:14:47.046 QEMU NVMe Ctrl (12340 ): 14165 I/Os completed (+1476) 00:14:47.046 QEMU NVMe Ctrl (12341 ): 14467 I/Os completed (+1508) 00:14:47.046 00:14:48.419 QEMU NVMe Ctrl (12340 ): 15633 I/Os completed (+1468) 00:14:48.419 QEMU NVMe Ctrl (12341 ): 15972 I/Os completed (+1505) 00:14:48.419 00:14:49.355 QEMU NVMe Ctrl (12340 ): 17065 I/Os completed (+1432) 00:14:49.355 QEMU NVMe Ctrl (12341 ): 17441 I/Os completed (+1469) 00:14:49.355 00:14:49.614 07:51:12 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:14:49.614 07:51:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:49.614 07:51:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:49.614 07:51:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:49.614 [2024-11-06 07:51:12.059205] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:49.614 Controller removed: QEMU NVMe Ctrl (12340 ) 00:14:49.614 [2024-11-06 07:51:12.061671] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:49.614 [2024-11-06 07:51:12.061775] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:49.614 [2024-11-06 07:51:12.061815] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:49.614 [2024-11-06 07:51:12.061854] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:49.614 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:49.614 [2024-11-06 07:51:12.065574] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:49.614 [2024-11-06 07:51:12.065651] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:49.614 [2024-11-06 07:51:12.065680] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:49.614 [2024-11-06 07:51:12.065708] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:49.614 07:51:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:49.614 07:51:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:49.614 EAL: Cannot open sysfs resource 00:14:49.614 EAL: pci_scan_one(): cannot parse resource 00:14:49.614 EAL: Scan for (pci) bus failed. 00:14:49.615 [2024-11-06 07:51:12.090928] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
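A note on the mechanics of the cycle above: each hotplug event is driven purely through sysfs. The "echo 1" traced at sw_hotplug.sh@40 is a surprise-removal write against the controller's PCI device node, and the "echo 1" at sw_hotplug.sh@56 is the bus rescan that brings both controllers back; the EAL "Cannot open sysfs resource" / "Scan for (pci) bus failed" lines are DPDK's PCI scan racing a device that has just vanished, which is expected mid-removal. A minimal sketch of the remove/rescan pair, assuming the standard Linux sysfs hotplug nodes (the script's exact paths are not visible in the trace):

    # Detach: simulate surprise removal of each controller (sw_hotplug.sh@40, "echo 1")
    for dev in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$dev/remove"
    done
    # Re-attach: ask the kernel to re-enumerate the bus (sw_hotplug.sh@56, "echo 1")
    echo 1 > /sys/bus/pci/rescan

The sw_hotplug.sh@58-@62 loop that follows, which echoes a driver name, the BDF twice, and an empty string, is consistent with the usual driver_override/bind/clear-override sequence that pins each re-discovered device to uio_pci_generic, though the sysfs targets of those echoes are likewise not shown in the trace.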
00:14:49.615 Controller removed: QEMU NVMe Ctrl (12341 ) 00:14:49.615 [2024-11-06 07:51:12.093238] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:49.615 [2024-11-06 07:51:12.093383] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:49.615 [2024-11-06 07:51:12.093457] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:49.615 [2024-11-06 07:51:12.093605] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:49.615 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:49.615 [2024-11-06 07:51:12.096927] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:49.615 [2024-11-06 07:51:12.097113] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:49.615 [2024-11-06 07:51:12.097190] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:49.615 [2024-11-06 07:51:12.097332] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:49.615 07:51:12 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:14:49.615 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:14:49.615 EAL: Scan for (pci) bus failed. 00:14:49.615 07:51:12 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:49.615 07:51:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:49.615 07:51:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:49.615 07:51:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:49.873 07:51:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:49.873 07:51:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:49.873 07:51:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:49.873 07:51:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:49.873 07:51:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:49.873 Attaching to 0000:00:10.0 00:14:49.873 Attached to 0000:00:10.0 00:14:49.873 07:51:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:49.873 07:51:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:49.873 07:51:12 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:49.873 Attaching to 0000:00:11.0 00:14:49.873 Attached to 0000:00:11.0 00:14:49.873 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:49.873 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:49.873 [2024-11-06 07:51:12.427077] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:15:02.084 07:51:24 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:15:02.084 07:51:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:02.084 07:51:24 sw_hotplug -- common/autotest_common.sh@717 -- # time=43.00 00:15:02.084 07:51:24 sw_hotplug -- common/autotest_common.sh@718 -- # echo 43.00 00:15:02.084 07:51:24 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:15:02.084 07:51:24 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.00 00:15:02.084 07:51:24 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.00 2 00:15:02.084 remove_attach_helper took 43.00s to complete (handling 2 nvme drive(s)) 07:51:24 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:15:08.645 07:51:30 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68390 00:15:08.645 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68390) - No such process 00:15:08.645 07:51:30 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68390 00:15:08.645 07:51:30 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:15:08.645 07:51:30 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:15:08.645 07:51:30 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:15:08.645 07:51:30 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=68933 00:15:08.646 07:51:30 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:08.646 07:51:30 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:15:08.646 07:51:30 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 68933 00:15:08.646 07:51:30 sw_hotplug -- common/autotest_common.sh@831 -- # '[' -z 68933 ']' 00:15:08.646 07:51:30 sw_hotplug -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.646 07:51:30 sw_hotplug -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:08.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:08.646 07:51:30 sw_hotplug -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.646 07:51:30 sw_hotplug -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:08.646 07:51:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:08.646 [2024-11-06 07:51:30.559078] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
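Two details of the process handling traced above are worth spelling out. First, "kill -0" delivers no signal at all; it only tests whether the PID still exists, so the "No such process" error at sw_hotplug.sh@93 is the expected way of detecting that the previous helper (68390) has already exited, after which "wait" reaps its status. Second, the trap registered for the freshly started spdk_tgt (visible verbatim at sw_hotplug.sh@112) is what guarantees the hot-removed devices are restored even if the test dies mid-cycle:

    # Copied from the trace: on any exit path, kill the target and rescan
    # the PCI bus so the removed NVMe controllers come back.
    trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT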
00:15:08.646 [2024-11-06 07:51:30.559590] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68933 ] 00:15:08.646 [2024-11-06 07:51:30.742703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.646 [2024-11-06 07:51:30.902773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.214 07:51:31 sw_hotplug -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:09.214 07:51:31 sw_hotplug -- common/autotest_common.sh@864 -- # return 0 00:15:09.214 07:51:31 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:15:09.214 07:51:31 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.214 07:51:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:09.214 07:51:31 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.214 07:51:31 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:15:09.214 07:51:31 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:15:09.214 07:51:31 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:15:09.214 07:51:31 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:15:09.214 07:51:31 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:15:09.214 07:51:31 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:15:09.214 07:51:31 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:15:09.214 07:51:31 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:15:09.214 07:51:31 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:15:09.214 07:51:31 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:15:09.214 07:51:31 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:15:09.214 07:51:31 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:15:09.214 07:51:31 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:15:15.780 07:51:37 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:15.780 07:51:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:15.780 07:51:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:15.780 07:51:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:15.780 07:51:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:15.780 07:51:37 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:15.780 07:51:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:15.780 07:51:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:15.780 07:51:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:15.780 07:51:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:15.780 07:51:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:15.780 07:51:37 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.780 07:51:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:15.780 [2024-11-06 07:51:37.912808] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
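With the target up and hotplug monitoring enabled via "rpc_cmd bdev_nvme_set_hotplug -e" (sw_hotplug.sh@115), the test stops watching raw PCI state and instead asks the target which NVMe bdevs still exist. The bdev_bdfs helper traced at sw_hotplug.sh@12-@13 reduces the RPC answer to a sorted set of PCI addresses; the /dev/fd/63 in the trace is bash process substitution. A reconstruction, assuming rpc_cmd is the autotest wrapper around scripts/rpc.py:

    bdev_bdfs() {
        # List every bdev, keep the NVMe-backed ones, and print each backing
        # controller's PCI address exactly once.
        jq -r '.[].driver_specific.nvme[].pci_address' <(rpc_cmd bdev_get_bdevs) | sort -u
    }

An empty result means the target has torn down every bdev that sat on the removed controllers; two results ("0000:00:10.0 0000:00:11.0") mean both are attached.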
00:15:15.780 [2024-11-06 07:51:37.915977] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:15.780 [2024-11-06 07:51:37.916186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:15.780 [2024-11-06 07:51:37.916225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:15.780 [2024-11-06 07:51:37.916274] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:15.780 [2024-11-06 07:51:37.916296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:15.780 [2024-11-06 07:51:37.916316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:15.780 [2024-11-06 07:51:37.916334] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:15.780 [2024-11-06 07:51:37.916354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:15.780 [2024-11-06 07:51:37.916370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:15.780 [2024-11-06 07:51:37.916395] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:15.780 [2024-11-06 07:51:37.916411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:15.780 07:51:37 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.780 [2024-11-06 07:51:37.916442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:15.780 07:51:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:15:15.780 07:51:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:15.780 [2024-11-06 07:51:38.312849] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:15:15.780 [2024-11-06 07:51:38.315946] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:15.780 [2024-11-06 07:51:38.316009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:15.780 [2024-11-06 07:51:38.316035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:15.780 [2024-11-06 07:51:38.316064] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:15.780 [2024-11-06 07:51:38.316084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:15.780 [2024-11-06 07:51:38.316100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:15.780 [2024-11-06 07:51:38.316136] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:15.780 [2024-11-06 07:51:38.316168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:15.780 [2024-11-06 07:51:38.316187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:15.780 [2024-11-06 07:51:38.316204] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:15.780 [2024-11-06 07:51:38.316224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:15.780 [2024-11-06 07:51:38.316240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:16.040 07:51:38 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:15:16.040 07:51:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:16.040 07:51:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:16.040 07:51:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:16.040 07:51:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:16.040 07:51:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:16.040 07:51:38 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.040 07:51:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:16.040 07:51:38 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.040 07:51:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:16.040 07:51:38 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:16.040 07:51:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:16.040 07:51:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:16.040 07:51:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:16.298 07:51:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:16.298 07:51:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:16.298 07:51:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:16.298 07:51:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:16.298 07:51:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:15:16.298 07:51:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:16.298 07:51:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:16.299 07:51:38 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:28.502 07:51:50 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:28.502 07:51:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:28.502 07:51:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:28.502 07:51:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:28.502 07:51:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:28.502 07:51:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:28.502 07:51:50 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.502 07:51:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:28.502 07:51:50 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.502 07:51:50 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:28.502 07:51:50 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:28.502 07:51:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:28.502 07:51:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:28.502 07:51:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:28.502 07:51:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:28.502 07:51:50 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:28.502 07:51:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:28.502 07:51:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:28.502 07:51:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:28.502 07:51:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:28.502 07:51:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:28.502 07:51:50 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.502 07:51:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:28.502 [2024-11-06 07:51:50.913063] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
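The alternating "(( 2 > 0 ))" / "sleep 0.5" / "Still waiting for %s to be gone" entries at sw_hotplug.sh@50-@51 above are a polling loop: after a surprise removal the test refuses to proceed until bdev_bdfs returns nothing, that is, until the target has finished unregistering every affected bdev. A sketch of the loop as implied by the trace (an assumption: the script's exact control flow and the placement of the sleep may differ):

    while bdfs=($(bdev_bdfs)) && ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
    done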
00:15:28.502 [2024-11-06 07:51:50.916265] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:28.502 [2024-11-06 07:51:50.916364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:28.502 [2024-11-06 07:51:50.916389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.502 [2024-11-06 07:51:50.916421] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:28.502 [2024-11-06 07:51:50.916450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:28.502 [2024-11-06 07:51:50.916487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.502 [2024-11-06 07:51:50.916505] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:28.502 [2024-11-06 07:51:50.916525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:28.502 [2024-11-06 07:51:50.916541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.502 [2024-11-06 07:51:50.916561] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:28.502 [2024-11-06 07:51:50.916578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:28.502 [2024-11-06 07:51:50.916597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.502 07:51:50 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.502 07:51:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:15:28.502 07:51:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:29.069 [2024-11-06 07:51:51.413056] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:15:29.069 [2024-11-06 07:51:51.415990] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:29.069 [2024-11-06 07:51:51.416054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:29.069 [2024-11-06 07:51:51.416092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:29.069 [2024-11-06 07:51:51.416119] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:29.069 [2024-11-06 07:51:51.416139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:29.069 [2024-11-06 07:51:51.416155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:29.069 [2024-11-06 07:51:51.416173] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:29.069 [2024-11-06 07:51:51.416189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:29.069 [2024-11-06 07:51:51.416207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:29.069 [2024-11-06 07:51:51.416222] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:29.069 [2024-11-06 07:51:51.416251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:29.069 [2024-11-06 07:51:51.416322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:29.069 07:51:51 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:15:29.069 07:51:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:29.069 07:51:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:29.069 07:51:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:29.069 07:51:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:29.069 07:51:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:29.069 07:51:51 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.069 07:51:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:29.069 07:51:51 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.069 07:51:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:29.069 07:51:51 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:29.069 07:51:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:29.069 07:51:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:29.069 07:51:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:29.328 07:51:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:29.328 07:51:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:29.328 07:51:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:29.328 07:51:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:29.328 07:51:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:15:29.328 07:51:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:29.328 07:51:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:29.328 07:51:51 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:41.531 07:52:03 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:41.531 07:52:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:41.531 07:52:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:41.531 07:52:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:41.531 07:52:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:41.531 07:52:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:41.531 07:52:03 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.531 07:52:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:41.531 07:52:03 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.531 07:52:03 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:41.531 07:52:03 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:41.531 07:52:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:41.531 07:52:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:41.531 07:52:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:41.531 07:52:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:41.531 [2024-11-06 07:52:03.913376] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:15:41.531 [2024-11-06 07:52:03.917027] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:41.531 [2024-11-06 07:52:03.917090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.531 [2024-11-06 07:52:03.917114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:41.531 [2024-11-06 07:52:03.917147] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:41.531 [2024-11-06 07:52:03.917166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.531 [2024-11-06 07:52:03.917188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:41.531 [2024-11-06 07:52:03.917207] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:41.531 [2024-11-06 07:52:03.917227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.531 [2024-11-06 07:52:03.917243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:41.531 [2024-11-06 07:52:03.917304] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:41.531 [2024-11-06 07:52:03.917322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.531 [2024-11-06 07:52:03.917342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST 
(00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:41.531 07:52:03 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:41.531 07:52:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:41.531 07:52:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:41.531 07:52:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:41.531 07:52:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:41.531 07:52:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:41.531 07:52:03 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.531 07:52:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:41.531 07:52:03 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.531 07:52:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:15:41.531 07:52:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:41.791 [2024-11-06 07:52:04.313355] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:15:41.791 [2024-11-06 07:52:04.316886] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:41.791 [2024-11-06 07:52:04.316959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.791 [2024-11-06 07:52:04.316987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:41.791 [2024-11-06 07:52:04.317017] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:41.791 [2024-11-06 07:52:04.317048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.791 [2024-11-06 07:52:04.317066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:41.791 [2024-11-06 07:52:04.317088] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:41.791 [2024-11-06 07:52:04.317106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.791 [2024-11-06 07:52:04.317129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:41.791 [2024-11-06 07:52:04.317146] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:41.791 [2024-11-06 07:52:04.317165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.791 [2024-11-06 07:52:04.317182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:42.049 07:52:04 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:15:42.049 07:52:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:42.049 07:52:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:42.049 07:52:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:42.049 07:52:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:42.049 07:52:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:15:42.049 07:52:04 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.049 07:52:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:42.049 07:52:04 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.049 07:52:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:42.049 07:52:04 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:42.049 07:52:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:42.049 07:52:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:42.049 07:52:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:42.308 07:52:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:42.308 07:52:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:42.308 07:52:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:42.308 07:52:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:42.308 07:52:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:15:42.308 07:52:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:42.308 07:52:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:42.308 07:52:04 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:54.552 07:52:16 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:54.552 07:52:16 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:54.552 07:52:16 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:54.552 07:52:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:54.552 07:52:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:54.552 07:52:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:54.552 07:52:16 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.552 07:52:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:54.552 07:52:16 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.552 07:52:16 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:54.552 07:52:16 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:54.552 07:52:16 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.09 00:15:54.552 07:52:16 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.09 00:15:54.552 07:52:16 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:15:54.552 07:52:16 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.09 00:15:54.552 07:52:16 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.09 2 00:15:54.552 remove_attach_helper took 45.09s to complete (handling 2 nvme drive(s)) 07:52:16 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:15:54.552 07:52:16 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.552 07:52:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:54.552 07:52:16 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.552 07:52:16 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:15:54.552 07:52:16 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.552 07:52:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:54.552 07:52:16 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.552 07:52:16 sw_hotplug -- 
nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:15:54.552 07:52:16 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:15:54.552 07:52:16 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:15:54.552 07:52:16 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:15:54.552 07:52:16 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:15:54.552 07:52:16 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:15:54.552 07:52:16 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:15:54.552 07:52:16 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:15:54.552 07:52:16 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:15:54.552 07:52:16 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:15:54.552 07:52:16 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:15:54.552 07:52:16 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:15:54.552 07:52:16 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:16:01.109 07:52:22 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:01.109 07:52:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:01.109 07:52:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:01.109 07:52:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:01.109 07:52:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:01.109 07:52:23 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:16:01.109 07:52:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:01.109 07:52:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:01.109 07:52:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:01.109 07:52:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:01.109 07:52:23 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.109 07:52:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:01.109 07:52:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:01.109 [2024-11-06 07:52:23.041071] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
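The bare "45.09" produced just above comes from bash's time builtin: TIMEFORMAT=%2R (set inside timing_cmd, common/autotest_common.sh@711) restricts its output to elapsed wall-clock seconds with two decimals, which the caller then feeds into the "remove_attach_helper took %ss" printf. A minimal standalone illustration of the mechanism, not the autotest helper itself (whose body is only partially visible in the trace):

    # Capture a command's wall-clock runtime as a plain number such as "2.00".
    TIMEFORMAT=%2R
    elapsed=$( { time sleep 2 > /dev/null; } 2>&1 )
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' "$elapsed" 2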
00:16:01.109 [2024-11-06 07:52:23.044076] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:01.109 [2024-11-06 07:52:23.044136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:01.109 [2024-11-06 07:52:23.044162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.109 [2024-11-06 07:52:23.044195] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:01.109 [2024-11-06 07:52:23.044218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:01.109 [2024-11-06 07:52:23.044238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.109 [2024-11-06 07:52:23.044276] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:01.109 [2024-11-06 07:52:23.044299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:01.109 [2024-11-06 07:52:23.044316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.109 [2024-11-06 07:52:23.044337] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:01.109 [2024-11-06 07:52:23.044354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:01.109 [2024-11-06 07:52:23.044376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.109 07:52:23 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.109 07:52:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:16:01.109 07:52:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:16:01.109 07:52:23 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:16:01.109 07:52:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:01.109 07:52:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:01.109 07:52:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:01.109 07:52:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:01.109 07:52:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:01.109 07:52:23 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.109 07:52:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:01.109 07:52:23 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.109 [2024-11-06 07:52:23.641084] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
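A few entries back, the script toggled the target's hotplug monitor off and on again (sw_hotplug.sh@119-@120: bdev_nvme_set_hotplug -d, then -e), exercising the disable path before re-arming detection for the cycles that follow. Outside the harness the same toggles can be issued directly; assuming the default RPC socket, that is:

    scripts/rpc.py bdev_nvme_set_hotplug -d   # stop polling for PCI add/remove events
    scripts/rpc.py bdev_nvme_set_hotplug -e   # re-enable the hotplug monitor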
00:16:01.109 [2024-11-06 07:52:23.643305] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:01.109 [2024-11-06 07:52:23.643516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:01.109 [2024-11-06 07:52:23.643558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.109 [2024-11-06 07:52:23.643589] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:01.109 [2024-11-06 07:52:23.643610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:01.109 [2024-11-06 07:52:23.643628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.109 [2024-11-06 07:52:23.643650] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:01.109 [2024-11-06 07:52:23.643667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:01.109 [2024-11-06 07:52:23.643687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.109 [2024-11-06 07:52:23.643704] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:01.109 [2024-11-06 07:52:23.643724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:01.109 [2024-11-06 07:52:23.643741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:01.109 07:52:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:16:01.109 07:52:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:16:01.676 07:52:24 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:16:01.676 07:52:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:01.676 07:52:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:01.676 07:52:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:01.676 07:52:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:01.676 07:52:24 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.676 07:52:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:01.676 07:52:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:01.676 07:52:24 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.676 07:52:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:16:01.676 07:52:24 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:01.934 07:52:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:01.934 07:52:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:01.934 07:52:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:01.934 07:52:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:01.934 07:52:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:01.934 07:52:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:01.934 07:52:24 
sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:01.934 07:52:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:16:01.934 07:52:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:01.934 07:52:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:01.934 07:52:24 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:14.138 07:52:36 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:16:14.138 07:52:36 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:16:14.138 07:52:36 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:16:14.138 07:52:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:14.138 07:52:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:14.138 07:52:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:14.138 07:52:36 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.138 07:52:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:14.138 07:52:36 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.138 07:52:36 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:14.138 07:52:36 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:14.138 07:52:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:14.138 07:52:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:14.138 07:52:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:14.138 07:52:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:14.138 07:52:36 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:16:14.138 07:52:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:14.138 07:52:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:14.138 07:52:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:14.138 07:52:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:14.138 07:52:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:14.138 07:52:36 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.138 07:52:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:14.138 [2024-11-06 07:52:36.641288] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
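The heavily backslashed comparison at sw_hotplug.sh@71 above is not corruption: when the right-hand side of a bash [[ ... == ... ]] is quoted, xtrace prints each of its characters backslash-escaped to show it is matched literally rather than as a glob. The check simply asserts that the BDFs reported by bdev_bdfs equal the expected pair, confirming that both controllers re-attached before the next event fires. A tiny demo of the same rendering:

    set -x
    expected='0000:00:10.0 0000:00:11.0'
    # xtrace prints this as: [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ ... ]]
    [[ "0000:00:10.0 0000:00:11.0" == "$expected" ]] && echo 'both controllers back'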
00:16:14.138 [2024-11-06 07:52:36.643706] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:14.138 [2024-11-06 07:52:36.643879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:14.138 [2024-11-06 07:52:36.644078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.138 [2024-11-06 07:52:36.644297] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:14.138 [2024-11-06 07:52:36.644444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:14.138 [2024-11-06 07:52:36.644627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.138 [2024-11-06 07:52:36.644794] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:14.138 [2024-11-06 07:52:36.644943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:14.138 [2024-11-06 07:52:36.645096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.138 [2024-11-06 07:52:36.645279] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:14.138 [2024-11-06 07:52:36.645410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:14.138 [2024-11-06 07:52:36.645574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.138 07:52:36 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.138 07:52:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:16:14.138 07:52:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:16:14.704 [2024-11-06 07:52:37.041304] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:16:14.704 [2024-11-06 07:52:37.043814] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:14.704 [2024-11-06 07:52:37.043985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:14.704 [2024-11-06 07:52:37.044145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.704 [2024-11-06 07:52:37.044338] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:14.704 [2024-11-06 07:52:37.044478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:14.704 [2024-11-06 07:52:37.044676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.704 [2024-11-06 07:52:37.044945] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:14.704 [2024-11-06 07:52:37.045073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:14.704 [2024-11-06 07:52:37.045245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.704 [2024-11-06 07:52:37.045435] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:14.704 [2024-11-06 07:52:37.045497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:14.704 [2024-11-06 07:52:37.045729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.705 07:52:37 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:16:14.705 07:52:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:14.705 07:52:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:14.705 07:52:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:14.705 07:52:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:14.705 07:52:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:14.705 07:52:37 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.705 07:52:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:14.705 07:52:37 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.705 07:52:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:16:14.705 07:52:37 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:14.705 07:52:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:14.705 07:52:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:14.705 07:52:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:14.963 07:52:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:14.963 07:52:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:14.963 07:52:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:14.963 07:52:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:14.963 07:52:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:16:14.963 07:52:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:14.963 07:52:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:14.963 07:52:37 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:27.213 07:52:49 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:16:27.213 07:52:49 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:16:27.213 07:52:49 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:16:27.213 07:52:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:27.213 07:52:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:27.213 07:52:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:27.213 07:52:49 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.213 07:52:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:27.213 07:52:49 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.213 07:52:49 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:27.214 07:52:49 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:27.214 07:52:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:27.214 07:52:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:27.214 07:52:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:27.214 07:52:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:27.214 07:52:49 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:16:27.214 07:52:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:27.214 07:52:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:27.214 07:52:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:27.214 07:52:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:27.214 07:52:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:27.214 07:52:49 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.214 07:52:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:27.214 07:52:49 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.214 [2024-11-06 07:52:49.641530] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:16:27.214 [2024-11-06 07:52:49.647190] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:27.214 [2024-11-06 07:52:49.647410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:27.214 [2024-11-06 07:52:49.647661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.214 [2024-11-06 07:52:49.647919] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:27.214 [2024-11-06 07:52:49.648046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:27.214 [2024-11-06 07:52:49.648200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.214 [2024-11-06 07:52:49.648478] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:27.214 [2024-11-06 07:52:49.648701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:27.214 [2024-11-06 07:52:49.648840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.214 [2024-11-06 07:52:49.648996] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:27.214 [2024-11-06 07:52:49.649160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:27.214 [2024-11-06 07:52:49.649343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.214 07:52:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:16:27.214 07:52:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:16:27.471 [2024-11-06 07:52:50.041530] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:16:27.471 [2024-11-06 07:52:50.044057] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:27.471 [2024-11-06 07:52:50.044240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:27.471 [2024-11-06 07:52:50.044423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.471 [2024-11-06 07:52:50.044606] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:27.471 [2024-11-06 07:52:50.044666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:27.471 [2024-11-06 07:52:50.044972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.471 [2024-11-06 07:52:50.045144] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:27.471 [2024-11-06 07:52:50.045381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:27.471 [2024-11-06 07:52:50.045558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.471 [2024-11-06 07:52:50.045780] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:27.471 [2024-11-06 07:52:50.046026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:27.471 [2024-11-06 07:52:50.046185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.730 07:52:50 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:16:27.730 07:52:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:27.730 07:52:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:27.730 07:52:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:27.730 07:52:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:27.730 07:52:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:27.730 07:52:50 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.730 07:52:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:27.730 07:52:50 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.730 07:52:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:16:27.730 07:52:50 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:27.730 07:52:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:27.730 07:52:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:27.730 07:52:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:27.987 07:52:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:27.987 07:52:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:27.987 07:52:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:27.987 07:52:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:27.987 07:52:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:16:27.987 07:52:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:27.987 07:52:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:27.987 07:52:50 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:40.190 07:53:02 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:16:40.190 07:53:02 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:16:40.190 07:53:02 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:16:40.190 07:53:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:40.190 07:53:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:40.190 07:53:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:40.190 07:53:02 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.190 07:53:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:40.190 07:53:02 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.190 07:53:02 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:40.190 07:53:02 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:40.190 07:53:02 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.61 00:16:40.190 07:53:02 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.61 00:16:40.190 07:53:02 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:16:40.190 07:53:02 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.61 00:16:40.190 07:53:02 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.61 2 00:16:40.190 remove_attach_helper took 45.61s to complete (handling 2 nvme drive(s)) 07:53:02 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:16:40.190 07:53:02 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 68933 00:16:40.190 07:53:02 sw_hotplug -- common/autotest_common.sh@950 -- # '[' -z 68933 ']' 00:16:40.190 07:53:02 sw_hotplug -- common/autotest_common.sh@954 -- # kill -0 68933 00:16:40.190 07:53:02 sw_hotplug -- common/autotest_common.sh@955 -- # uname 00:16:40.190 07:53:02 sw_hotplug -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:40.190 07:53:02 sw_hotplug -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68933 00:16:40.190 killing process with pid 68933 00:16:40.190 07:53:02 sw_hotplug -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:40.190 07:53:02 sw_hotplug -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:40.190 07:53:02 sw_hotplug -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68933' 00:16:40.190 07:53:02 sw_hotplug -- common/autotest_common.sh@969 -- # kill 68933 00:16:40.190 07:53:02 sw_hotplug -- common/autotest_common.sh@974 -- # wait 68933 00:16:42.721 07:53:04 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:42.721 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:43.287 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:43.287 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:43.558 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:43.558 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:43.558 00:16:43.558 real 2m32.702s 00:16:43.558 user 1m53.332s 00:16:43.558 sys 0m19.237s 00:16:43.558 07:53:06 sw_hotplug -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:16:43.558 07:53:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:43.558 ************************************ 00:16:43.558 END TEST sw_hotplug 00:16:43.558 ************************************ 00:16:43.558 07:53:06 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:16:43.558 07:53:06 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:16:43.558 07:53:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:43.558 07:53:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:43.558 07:53:06 -- common/autotest_common.sh@10 -- # set +x 00:16:43.558 ************************************ 00:16:43.558 START TEST nvme_xnvme 00:16:43.558 ************************************ 00:16:43.558 07:53:06 nvme_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:16:43.841 * Looking for test storage... 00:16:43.841 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:43.841 07:53:06 nvme_xnvme -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:16:43.841 07:53:06 nvme_xnvme -- common/autotest_common.sh@1689 -- # lcov --version 00:16:43.841 07:53:06 nvme_xnvme -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:16:43.841 07:53:06 nvme_xnvme -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:16:43.841 07:53:06 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:43.841 07:53:06 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:43.841 07:53:06 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:43.841 07:53:06 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:16:43.841 07:53:06 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:16:43.841 07:53:06 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:16:43.841 07:53:06 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:16:43.841 07:53:06 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:16:43.841 07:53:06 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:16:43.841 07:53:06 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:16:43.841 07:53:06 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:43.841 07:53:06 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:16:43.841 07:53:06 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:16:43.841 07:53:06 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:43.841 07:53:06 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:43.841 07:53:06 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:16:43.841 07:53:06 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:16:43.841 07:53:06 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:43.841 07:53:06 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:16:43.841 07:53:06 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:16:43.841 07:53:06 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:16:43.841 07:53:06 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:16:43.841 07:53:06 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:43.841 07:53:06 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:16:43.841 07:53:06 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:16:43.841 07:53:06 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:43.841 07:53:06 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:43.841 07:53:06 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:16:43.841 07:53:06 nvme_xnvme -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:43.841 07:53:06 nvme_xnvme -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:16:43.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.841 --rc genhtml_branch_coverage=1 00:16:43.841 --rc genhtml_function_coverage=1 00:16:43.841 --rc genhtml_legend=1 00:16:43.841 --rc geninfo_all_blocks=1 00:16:43.841 --rc geninfo_unexecuted_blocks=1 00:16:43.841 00:16:43.841 ' 00:16:43.841 07:53:06 nvme_xnvme -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:16:43.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.841 --rc genhtml_branch_coverage=1 00:16:43.841 --rc genhtml_function_coverage=1 00:16:43.841 --rc genhtml_legend=1 00:16:43.841 --rc geninfo_all_blocks=1 00:16:43.841 --rc geninfo_unexecuted_blocks=1 00:16:43.841 00:16:43.841 ' 00:16:43.841 07:53:06 nvme_xnvme -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:16:43.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.841 --rc genhtml_branch_coverage=1 00:16:43.841 --rc genhtml_function_coverage=1 00:16:43.841 --rc genhtml_legend=1 00:16:43.841 --rc geninfo_all_blocks=1 00:16:43.841 --rc geninfo_unexecuted_blocks=1 00:16:43.841 00:16:43.841 ' 00:16:43.841 07:53:06 nvme_xnvme -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:16:43.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.841 --rc genhtml_branch_coverage=1 00:16:43.841 --rc genhtml_function_coverage=1 00:16:43.841 --rc genhtml_legend=1 00:16:43.841 --rc geninfo_all_blocks=1 00:16:43.841 --rc geninfo_unexecuted_blocks=1 00:16:43.841 00:16:43.841 ' 00:16:43.841 07:53:06 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:43.841 07:53:06 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:16:43.841 07:53:06 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:43.841 07:53:06 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:43.841 07:53:06 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:43.841 07:53:06 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.841 07:53:06 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.841 07:53:06 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.841 07:53:06 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:16:43.841 07:53:06 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.841 07:53:06 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:16:43.841 07:53:06 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:43.841 07:53:06 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:43.841 07:53:06 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:43.841 ************************************ 00:16:43.841 START TEST xnvme_to_malloc_dd_copy 00:16:43.841 ************************************ 00:16:43.841 07:53:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1125 -- # malloc_to_xnvme_copy 00:16:43.842 07:53:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:16:43.842 07:53:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:16:43.842 07:53:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:16:43.842 07:53:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:16:43.842 07:53:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:16:43.842 07:53:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:16:43.842 07:53:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:16:43.842 07:53:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:16:43.842 07:53:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:16:43.842 07:53:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:16:43.842 07:53:06 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:16:43.842 07:53:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:16:43.842 07:53:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:16:43.842 07:53:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:16:43.842 07:53:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:16:43.842 07:53:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:16:43.842 07:53:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:16:43.842 07:53:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:16:43.842 07:53:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:16:43.842 07:53:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:16:43.842 07:53:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:16:43.842 07:53:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:16:43.842 07:53:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:16:43.842 07:53:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:16:43.842 { 00:16:43.842 "subsystems": [ 00:16:43.842 { 00:16:43.842 "subsystem": "bdev", 00:16:43.842 "config": [ 00:16:43.842 { 00:16:43.842 "params": { 00:16:43.842 "block_size": 512, 00:16:43.842 "num_blocks": 2097152, 00:16:43.842 "name": "malloc0" 00:16:43.842 }, 00:16:43.842 "method": "bdev_malloc_create" 00:16:43.842 }, 00:16:43.842 { 00:16:43.842 "params": { 00:16:43.842 "io_mechanism": "libaio", 00:16:43.842 "filename": "/dev/nullb0", 00:16:43.842 "name": "null0" 00:16:43.842 }, 00:16:43.842 "method": "bdev_xnvme_create" 00:16:43.842 }, 00:16:43.842 { 00:16:43.842 "method": "bdev_wait_for_examine" 00:16:43.842 } 00:16:43.842 ] 00:16:43.842 } 00:16:43.842 ] 00:16:43.842 } 00:16:44.101 [2024-11-06 07:53:06.488437] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
00:16:44.101 [2024-11-06 07:53:06.488652] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70299 ] 00:16:44.101 [2024-11-06 07:53:06.678055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.359 [2024-11-06 07:53:06.869270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.891  [2024-11-06T07:53:10.538Z] Copying: 161/1024 [MB] (161 MBps) [2024-11-06T07:53:11.913Z] Copying: 320/1024 [MB] (158 MBps) [2024-11-06T07:53:12.848Z] Copying: 477/1024 [MB] (156 MBps) [2024-11-06T07:53:13.783Z] Copying: 638/1024 [MB] (161 MBps) [2024-11-06T07:53:14.719Z] Copying: 798/1024 [MB] (159 MBps) [2024-11-06T07:53:14.978Z] Copying: 955/1024 [MB] (157 MBps) [2024-11-06T07:53:19.165Z] Copying: 1024/1024 [MB] (average 159 MBps) 00:16:56.536 00:16:56.536 07:53:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:16:56.536 07:53:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:16:56.536 07:53:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:16:56.536 07:53:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:16:56.794 { 00:16:56.794 "subsystems": [ 00:16:56.794 { 00:16:56.794 "subsystem": "bdev", 00:16:56.794 "config": [ 00:16:56.794 { 00:16:56.794 "params": { 00:16:56.794 "block_size": 512, 00:16:56.794 "num_blocks": 2097152, 00:16:56.794 "name": "malloc0" 00:16:56.794 }, 00:16:56.794 "method": "bdev_malloc_create" 00:16:56.794 }, 00:16:56.794 { 00:16:56.794 "params": { 00:16:56.794 "io_mechanism": "libaio", 00:16:56.794 "filename": "/dev/nullb0", 00:16:56.794 "name": "null0" 00:16:56.794 }, 00:16:56.794 "method": "bdev_xnvme_create" 00:16:56.794 }, 00:16:56.794 { 00:16:56.794 "method": "bdev_wait_for_examine" 00:16:56.794 } 00:16:56.794 ] 00:16:56.794 } 00:16:56.794 ] 00:16:56.794 } 00:16:56.794 [2024-11-06 07:53:19.270746] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
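The copy pass that just completed is driven by spdk_dd with the bdev configuration printed above; the test pipes that JSON to the tool over /dev/fd/62 via gen_conf. A standalone reconstruction, with the file descriptor plumbing replaced by an ordinary file (the /tmp path is illustrative, not from the run), would look like:

cat > /tmp/xnvme_copy.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "block_size": 512, "num_blocks": 2097152, "name": "malloc0" },
          "method": "bdev_malloc_create"
        },
        {
          "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
# Copy the 1 GiB malloc bdev (2097152 x 512-byte blocks) onto the xnvme null bdev.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /tmp/xnvme_copy.json

The reverse pass at xnvme.sh@47 simply swaps --ib and --ob (null0 back into malloc0), as the trace below shows.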
00:16:56.794 [2024-11-06 07:53:19.271311] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70442 ] 00:16:57.057 [2024-11-06 07:53:19.465763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.057 [2024-11-06 07:53:19.647619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.338  [2024-11-06T07:53:23.534Z] Copying: 163/1024 [MB] (163 MBps) [2024-11-06T07:53:24.490Z] Copying: 329/1024 [MB] (165 MBps) [2024-11-06T07:53:25.426Z] Copying: 493/1024 [MB] (164 MBps) [2024-11-06T07:53:26.362Z] Copying: 658/1024 [MB] (164 MBps) [2024-11-06T07:53:27.302Z] Copying: 824/1024 [MB] (165 MBps) [2024-11-06T07:53:27.560Z] Copying: 989/1024 [MB] (165 MBps) [2024-11-06T07:53:31.747Z] Copying: 1024/1024 [MB] (average 164 MBps) 00:17:09.118 00:17:09.118 07:53:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:17:09.118 07:53:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:17:09.118 07:53:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:17:09.118 07:53:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:17:09.118 07:53:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:17:09.118 07:53:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:17:09.118 { 00:17:09.118 "subsystems": [ 00:17:09.118 { 00:17:09.118 "subsystem": "bdev", 00:17:09.118 "config": [ 00:17:09.118 { 00:17:09.118 "params": { 00:17:09.118 "block_size": 512, 00:17:09.118 "num_blocks": 2097152, 00:17:09.118 "name": "malloc0" 00:17:09.118 }, 00:17:09.118 "method": "bdev_malloc_create" 00:17:09.118 }, 00:17:09.118 { 00:17:09.118 "params": { 00:17:09.118 "io_mechanism": "io_uring", 00:17:09.118 "filename": "/dev/nullb0", 00:17:09.118 "name": "null0" 00:17:09.118 }, 00:17:09.118 "method": "bdev_xnvme_create" 00:17:09.118 }, 00:17:09.118 { 00:17:09.118 "method": "bdev_wait_for_examine" 00:17:09.118 } 00:17:09.118 ] 00:17:09.118 } 00:17:09.118 ] 00:17:09.118 } 00:17:09.118 [2024-11-06 07:53:31.652361] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
00:17:09.118 [2024-11-06 07:53:31.652879] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70580 ] 00:17:09.376 [2024-11-06 07:53:31.835569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.376 [2024-11-06 07:53:31.984502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.657  [2024-11-06T07:53:35.544Z] Copying: 169/1024 [MB] (169 MBps) [2024-11-06T07:53:36.920Z] Copying: 337/1024 [MB] (168 MBps) [2024-11-06T07:53:37.852Z] Copying: 506/1024 [MB] (168 MBps) [2024-11-06T07:53:38.787Z] Copying: 675/1024 [MB] (168 MBps) [2024-11-06T07:53:39.722Z] Copying: 845/1024 [MB] (170 MBps) [2024-11-06T07:53:39.722Z] Copying: 1007/1024 [MB] (162 MBps) [2024-11-06T07:53:43.914Z] Copying: 1024/1024 [MB] (average 167 MBps) 00:17:21.285 00:17:21.285 07:53:43 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:17:21.285 07:53:43 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:17:21.285 07:53:43 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:17:21.285 07:53:43 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:17:21.285 { 00:17:21.285 "subsystems": [ 00:17:21.285 { 00:17:21.285 "subsystem": "bdev", 00:17:21.285 "config": [ 00:17:21.285 { 00:17:21.285 "params": { 00:17:21.285 "block_size": 512, 00:17:21.285 "num_blocks": 2097152, 00:17:21.285 "name": "malloc0" 00:17:21.285 }, 00:17:21.285 "method": "bdev_malloc_create" 00:17:21.285 }, 00:17:21.285 { 00:17:21.285 "params": { 00:17:21.285 "io_mechanism": "io_uring", 00:17:21.285 "filename": "/dev/nullb0", 00:17:21.285 "name": "null0" 00:17:21.285 }, 00:17:21.285 "method": "bdev_xnvme_create" 00:17:21.285 }, 00:17:21.285 { 00:17:21.285 "method": "bdev_wait_for_examine" 00:17:21.285 } 00:17:21.285 ] 00:17:21.285 } 00:17:21.285 ] 00:17:21.285 } 00:17:21.285 [2024-11-06 07:53:43.489835] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
00:17:21.285 [2024-11-06 07:53:43.490053] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70708 ] 00:17:21.285 [2024-11-06 07:53:43.675218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.285 [2024-11-06 07:53:43.825226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.815  [2024-11-06T07:53:47.379Z] Copying: 173/1024 [MB] (173 MBps) [2024-11-06T07:53:48.313Z] Copying: 349/1024 [MB] (175 MBps) [2024-11-06T07:53:49.270Z] Copying: 524/1024 [MB] (174 MBps) [2024-11-06T07:53:50.643Z] Copying: 698/1024 [MB] (174 MBps) [2024-11-06T07:53:51.210Z] Copying: 874/1024 [MB] (176 MBps) [2024-11-06T07:53:55.398Z] Copying: 1024/1024 [MB] (average 174 MBps) 00:17:32.769 00:17:32.769 07:53:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:17:32.769 07:53:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:17:32.769 00:17:32.769 real 0m48.446s 00:17:32.769 user 0m41.907s 00:17:32.769 sys 0m5.888s 00:17:32.769 07:53:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:32.769 ************************************ 00:17:32.769 END TEST xnvme_to_malloc_dd_copy 00:17:32.769 ************************************ 00:17:32.769 07:53:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:17:32.769 07:53:54 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:17:32.769 07:53:54 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:32.769 07:53:54 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:32.769 07:53:54 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:32.769 ************************************ 00:17:32.769 START TEST xnvme_bdevperf 00:17:32.769 ************************************ 00:17:32.769 07:53:54 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1125 -- # xnvme_bdevperf 00:17:32.769 07:53:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:17:32.769 07:53:54 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:17:32.769 07:53:54 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:17:32.769 07:53:54 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:17:32.769 07:53:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:17:32.769 07:53:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:17:32.769 07:53:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:17:32.769 07:53:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:17:32.769 07:53:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:17:32.769 07:53:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:17:32.769 07:53:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:17:32.769 07:53:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:17:32.769 07:53:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:17:32.769 07:53:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # 
method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:17:32.770 07:53:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:17:32.770 07:53:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:17:32.770 07:53:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:17:32.770 07:53:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:17:32.770 07:53:54 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:32.770 07:53:54 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:32.770 { 00:17:32.770 "subsystems": [ 00:17:32.770 { 00:17:32.770 "subsystem": "bdev", 00:17:32.770 "config": [ 00:17:32.770 { 00:17:32.770 "params": { 00:17:32.770 "io_mechanism": "libaio", 00:17:32.770 "filename": "/dev/nullb0", 00:17:32.770 "name": "null0" 00:17:32.770 }, 00:17:32.770 "method": "bdev_xnvme_create" 00:17:32.770 }, 00:17:32.770 { 00:17:32.770 "method": "bdev_wait_for_examine" 00:17:32.770 } 00:17:32.770 ] 00:17:32.770 } 00:17:32.770 ] 00:17:32.770 } 00:17:32.770 [2024-11-06 07:53:54.964455] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:17:32.770 [2024-11-06 07:53:54.964632] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70862 ] 00:17:32.770 [2024-11-06 07:53:55.142377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.770 [2024-11-06 07:53:55.302930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.336 Running I/O for 5 seconds... 
00:17:35.205 115968.00 IOPS, 453.00 MiB/s [2024-11-06T07:53:58.783Z] 115552.00 IOPS, 451.38 MiB/s [2024-11-06T07:53:59.728Z] 115370.67 IOPS, 450.67 MiB/s [2024-11-06T07:54:01.101Z] 115488.00 IOPS, 451.12 MiB/s 00:17:38.472 Latency(us) 00:17:38.472 [2024-11-06T07:54:01.101Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:38.472 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:17:38.472 null0 : 5.00 116140.35 453.67 0.00 0.00 547.57 163.84 3708.74 00:17:38.472 [2024-11-06T07:54:01.101Z] =================================================================================================================== 00:17:38.472 [2024-11-06T07:54:01.101Z] Total : 116140.35 453.67 0.00 0.00 547.57 163.84 3708.74 00:17:39.404 07:54:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:17:39.404 07:54:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:17:39.404 07:54:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:17:39.404 07:54:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:17:39.404 07:54:01 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:39.404 07:54:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:39.404 { 00:17:39.404 "subsystems": [ 00:17:39.404 { 00:17:39.404 "subsystem": "bdev", 00:17:39.404 "config": [ 00:17:39.404 { 00:17:39.404 "params": { 00:17:39.404 "io_mechanism": "io_uring", 00:17:39.404 "filename": "/dev/nullb0", 00:17:39.404 "name": "null0" 00:17:39.404 }, 00:17:39.404 "method": "bdev_xnvme_create" 00:17:39.404 }, 00:17:39.404 { 00:17:39.404 "method": "bdev_wait_for_examine" 00:17:39.404 } 00:17:39.404 ] 00:17:39.404 } 00:17:39.404 ] 00:17:39.404 } 00:17:39.404 [2024-11-06 07:54:01.871094] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:17:39.404 [2024-11-06 07:54:01.871339] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70936 ] 00:17:39.662 [2024-11-06 07:54:02.058928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.662 [2024-11-06 07:54:02.196072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.229 Running I/O for 5 seconds... 
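The pass above (and the io_uring pass now starting) is driven by the bdevperf invocation traced at xnvme.sh@74. Stripped of the /dev/fd/62 plumbing, with an illustrative config path substituted for it, the command amounts to the following sketch:

# 4 KiB random reads at queue depth 64 for 5 seconds against the xnvme bdev
# "null0"; the JSON is the single bdev_xnvme_create config shown in the trace,
# with io_mechanism toggled between libaio and io_uring per pass.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/xnvme_null0.json -q 64 -w randread -t 5 -T null0 -o 4096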
00:17:42.099 152576.00 IOPS, 596.00 MiB/s [2024-11-06T07:54:05.674Z] 152384.00 IOPS, 595.25 MiB/s [2024-11-06T07:54:06.607Z] 151914.67 IOPS, 593.42 MiB/s [2024-11-06T07:54:07.981Z] 151472.00 IOPS, 591.69 MiB/s 00:17:45.352 Latency(us) 00:17:45.352 [2024-11-06T07:54:07.981Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.352 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:17:45.352 null0 : 5.00 151310.19 591.06 0.00 0.00 419.60 236.45 3991.74 00:17:45.352 [2024-11-06T07:54:07.981Z] =================================================================================================================== 00:17:45.352 [2024-11-06T07:54:07.981Z] Total : 151310.19 591.06 0.00 0.00 419.60 236.45 3991.74 00:17:46.287 07:54:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:17:46.287 07:54:08 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:17:46.287 ************************************ 00:17:46.287 END TEST xnvme_bdevperf 00:17:46.287 ************************************ 00:17:46.287 00:17:46.287 real 0m13.819s 00:17:46.287 user 0m10.854s 00:17:46.287 sys 0m2.735s 00:17:46.287 07:54:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:46.287 07:54:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:46.287 ************************************ 00:17:46.287 END TEST nvme_xnvme 00:17:46.287 ************************************ 00:17:46.287 00:17:46.287 real 1m2.580s 00:17:46.287 user 0m52.924s 00:17:46.287 sys 0m8.774s 00:17:46.287 07:54:08 nvme_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:46.287 07:54:08 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:46.287 07:54:08 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:17:46.287 07:54:08 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:46.287 07:54:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:46.287 07:54:08 -- common/autotest_common.sh@10 -- # set +x 00:17:46.287 ************************************ 00:17:46.287 START TEST blockdev_xnvme 00:17:46.287 ************************************ 00:17:46.287 07:54:08 blockdev_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:17:46.287 * Looking for test storage... 
00:17:46.287 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:17:46.287 07:54:08 blockdev_xnvme -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:17:46.288 07:54:08 blockdev_xnvme -- common/autotest_common.sh@1689 -- # lcov --version 00:17:46.288 07:54:08 blockdev_xnvme -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:17:46.546 07:54:08 blockdev_xnvme -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:17:46.546 07:54:08 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:46.546 07:54:08 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:46.546 07:54:08 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:46.546 07:54:08 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:17:46.547 07:54:08 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:17:46.547 07:54:08 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:17:46.547 07:54:08 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:17:46.547 07:54:08 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:17:46.547 07:54:08 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:17:46.547 07:54:08 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:17:46.547 07:54:08 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:46.547 07:54:08 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:17:46.547 07:54:08 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:17:46.547 07:54:08 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:46.547 07:54:08 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:46.547 07:54:08 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:17:46.547 07:54:08 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:17:46.547 07:54:08 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:46.547 07:54:08 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:17:46.547 07:54:08 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:17:46.547 07:54:08 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:17:46.547 07:54:08 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:17:46.547 07:54:08 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:46.547 07:54:08 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:17:46.547 07:54:08 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:17:46.547 07:54:08 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:46.547 07:54:08 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:46.547 07:54:08 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:17:46.547 07:54:08 blockdev_xnvme -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:46.547 07:54:08 blockdev_xnvme -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:17:46.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.547 --rc genhtml_branch_coverage=1 00:17:46.547 --rc genhtml_function_coverage=1 00:17:46.547 --rc genhtml_legend=1 00:17:46.547 --rc geninfo_all_blocks=1 00:17:46.547 --rc geninfo_unexecuted_blocks=1 00:17:46.547 00:17:46.547 ' 00:17:46.547 07:54:08 blockdev_xnvme -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:17:46.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.547 --rc genhtml_branch_coverage=1 00:17:46.547 --rc genhtml_function_coverage=1 00:17:46.547 --rc genhtml_legend=1 
00:17:46.547 --rc geninfo_all_blocks=1 00:17:46.547 --rc geninfo_unexecuted_blocks=1 00:17:46.547 00:17:46.547 ' 00:17:46.547 07:54:08 blockdev_xnvme -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:17:46.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.547 --rc genhtml_branch_coverage=1 00:17:46.547 --rc genhtml_function_coverage=1 00:17:46.547 --rc genhtml_legend=1 00:17:46.547 --rc geninfo_all_blocks=1 00:17:46.547 --rc geninfo_unexecuted_blocks=1 00:17:46.547 00:17:46.547 ' 00:17:46.547 07:54:08 blockdev_xnvme -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:17:46.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.547 --rc genhtml_branch_coverage=1 00:17:46.547 --rc genhtml_function_coverage=1 00:17:46.547 --rc genhtml_legend=1 00:17:46.547 --rc geninfo_all_blocks=1 00:17:46.547 --rc geninfo_unexecuted_blocks=1 00:17:46.547 00:17:46.547 ' 00:17:46.547 07:54:08 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:17:46.547 07:54:08 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:17:46.547 07:54:08 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:17:46.547 07:54:08 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:46.547 07:54:08 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:17:46.547 07:54:08 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:17:46.547 07:54:08 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:17:46.547 07:54:08 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:17:46.547 07:54:08 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:17:46.547 07:54:08 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:17:46.547 07:54:08 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:17:46.547 07:54:08 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:17:46.547 07:54:09 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:17:46.547 07:54:09 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:17:46.547 07:54:09 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:17:46.547 07:54:09 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:17:46.547 07:54:09 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:17:46.547 07:54:09 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:17:46.547 07:54:09 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:17:46.547 07:54:09 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:17:46.547 07:54:09 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:17:46.547 07:54:09 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:17:46.547 07:54:09 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:17:46.547 07:54:09 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:17:46.547 07:54:09 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=71084 00:17:46.547 07:54:09 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:17:46.547 07:54:09 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:17:46.547 07:54:09 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 71084 00:17:46.547 07:54:09 blockdev_xnvme -- common/autotest_common.sh@831 -- # 
'[' -z 71084 ']' 00:17:46.547 07:54:09 blockdev_xnvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.547 07:54:09 blockdev_xnvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:46.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:46.547 07:54:09 blockdev_xnvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.547 07:54:09 blockdev_xnvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:46.547 07:54:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:46.547 [2024-11-06 07:54:09.141098] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:17:46.547 [2024-11-06 07:54:09.141504] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71084 ] 00:17:46.806 [2024-11-06 07:54:09.334750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.064 [2024-11-06 07:54:09.492064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:48.037 07:54:10 blockdev_xnvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:48.037 07:54:10 blockdev_xnvme -- common/autotest_common.sh@864 -- # return 0 00:17:48.037 07:54:10 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:17:48.037 07:54:10 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:17:48.037 07:54:10 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:17:48.037 07:54:10 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:17:48.037 07:54:10 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:48.294 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:48.553 Waiting for block devices as requested 00:17:48.553 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:48.811 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:48.811 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:17:48.811 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:17:54.107 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:17:54.107 07:54:16 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:17:54.107 07:54:16 blockdev_xnvme -- common/autotest_common.sh@1653 -- # zoned_devs=() 00:17:54.107 07:54:16 blockdev_xnvme -- common/autotest_common.sh@1653 -- # local -gA zoned_devs 00:17:54.108 07:54:16 blockdev_xnvme -- common/autotest_common.sh@1654 -- # local nvme bdf 00:17:54.108 07:54:16 blockdev_xnvme -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:17:54.108 07:54:16 blockdev_xnvme -- common/autotest_common.sh@1657 -- # is_block_zoned nvme0n1 00:17:54.108 07:54:16 blockdev_xnvme -- common/autotest_common.sh@1646 -- # local device=nvme0n1 00:17:54.108 07:54:16 blockdev_xnvme -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:54.108 07:54:16 blockdev_xnvme -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:17:54.108 07:54:16 blockdev_xnvme -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:17:54.108 07:54:16 blockdev_xnvme -- common/autotest_common.sh@1657 -- # is_block_zoned nvme1n1 00:17:54.108 
07:54:16 blockdev_xnvme -- common/autotest_common.sh@1646 -- # local device=nvme1n1 00:17:54.108 07:54:16 blockdev_xnvme -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:54.108 07:54:16 blockdev_xnvme -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:17:54.108 07:54:16 blockdev_xnvme -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:17:54.108 07:54:16 blockdev_xnvme -- common/autotest_common.sh@1657 -- # is_block_zoned nvme2n1 00:17:54.108 07:54:16 blockdev_xnvme -- common/autotest_common.sh@1646 -- # local device=nvme2n1 00:17:54.108 07:54:16 blockdev_xnvme -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:17:54.108 07:54:16 blockdev_xnvme -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:17:54.108 07:54:16 blockdev_xnvme -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:17:54.108 07:54:16 blockdev_xnvme -- common/autotest_common.sh@1657 -- # is_block_zoned nvme2n2 00:17:54.108 07:54:16 blockdev_xnvme -- common/autotest_common.sh@1646 -- # local device=nvme2n2 00:17:54.108 07:54:16 blockdev_xnvme -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:17:54.108 07:54:16 blockdev_xnvme -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:17:54.108 07:54:16 blockdev_xnvme -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:17:54.108 07:54:16 blockdev_xnvme -- common/autotest_common.sh@1657 -- # is_block_zoned nvme2n3 00:17:54.108 07:54:16 blockdev_xnvme -- common/autotest_common.sh@1646 -- # local device=nvme2n3 00:17:54.108 07:54:16 blockdev_xnvme -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:17:54.108 07:54:16 blockdev_xnvme -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:17:54.108 07:54:16 blockdev_xnvme -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:17:54.108 07:54:16 blockdev_xnvme -- common/autotest_common.sh@1657 -- # is_block_zoned nvme3c3n1 00:17:54.108 07:54:16 blockdev_xnvme -- common/autotest_common.sh@1646 -- # local device=nvme3c3n1 00:17:54.108 07:54:16 blockdev_xnvme -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:17:54.108 07:54:16 blockdev_xnvme -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:17:54.108 07:54:16 blockdev_xnvme -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:17:54.108 07:54:16 blockdev_xnvme -- common/autotest_common.sh@1657 -- # is_block_zoned nvme3n1 00:17:54.108 07:54:16 blockdev_xnvme -- common/autotest_common.sh@1646 -- # local device=nvme3n1 00:17:54.108 07:54:16 blockdev_xnvme -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:17:54.108 07:54:16 blockdev_xnvme -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:17:54.108 07:54:16 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:54.108 07:54:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:17:54.108 07:54:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:54.108 07:54:16 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:17:54.108 07:54:16 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:54.108 07:54:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:17:54.108 07:54:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:54.108 07:54:16 blockdev_xnvme -- bdev/blockdev.sh@96 
-- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:17:54.108 07:54:16 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:54.108 07:54:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:17:54.108 07:54:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:54.108 07:54:16 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:17:54.108 07:54:16 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:54.108 07:54:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:17:54.108 07:54:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:54.108 07:54:16 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:17:54.108 07:54:16 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:54.108 07:54:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:17:54.108 07:54:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:54.108 07:54:16 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:17:54.108 07:54:16 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:54.108 07:54:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:17:54.108 07:54:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:54.108 07:54:16 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:17:54.108 07:54:16 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:17:54.108 07:54:16 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:17:54.108 07:54:16 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.108 07:54:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:54.108 07:54:16 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:17:54.108 nvme0n1 00:17:54.108 nvme1n1 00:17:54.108 nvme2n1 00:17:54.108 nvme2n2 00:17:54.108 nvme2n3 00:17:54.108 nvme3n1 00:17:54.108 07:54:16 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.108 07:54:16 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:17:54.108 07:54:16 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.108 07:54:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:54.108 07:54:16 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.108 07:54:16 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:17:54.108 07:54:16 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:17:54.108 07:54:16 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.108 07:54:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:54.108 07:54:16 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.108 07:54:16 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:17:54.108 07:54:16 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.108 07:54:16 blockdev_xnvme -- common/autotest_common.sh@10 
-- # set +x 00:17:54.108 07:54:16 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.108 07:54:16 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:17:54.108 07:54:16 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.108 07:54:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:54.108 07:54:16 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.108 07:54:16 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:17:54.108 07:54:16 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:17:54.108 07:54:16 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:17:54.108 07:54:16 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.108 07:54:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:54.108 07:54:16 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.108 07:54:16 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:17:54.109 07:54:16 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "c67b8938-1241-4699-b3de-ce7b8a17b173"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "c67b8938-1241-4699-b3de-ce7b8a17b173",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "424a69ff-c6fd-4ab9-b5c4-01021d6a762d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "424a69ff-c6fd-4ab9-b5c4-01021d6a762d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "b4dfc2fc-644d-4d41-a887-f8a5e70eeb46"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b4dfc2fc-644d-4d41-a887-f8a5e70eeb46",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' 
"nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "8864fcd9-44f7-45b3-abbe-aa03d9622a57"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8864fcd9-44f7-45b3-abbe-aa03d9622a57",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "4e0fe1fb-aa52-4583-9181-35f698b783c6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4e0fe1fb-aa52-4583-9181-35f698b783c6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "e722aef1-783b-4511-8b3d-6f809139e5bb"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "e722aef1-783b-4511-8b3d-6f809139e5bb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:17:54.109 07:54:16 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:17:54.367 07:54:16 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:17:54.367 07:54:16 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:17:54.367 07:54:16 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:17:54.367 07:54:16 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 71084 00:17:54.367 07:54:16 blockdev_xnvme -- 
common/autotest_common.sh@950 -- # '[' -z 71084 ']' 00:17:54.367 07:54:16 blockdev_xnvme -- common/autotest_common.sh@954 -- # kill -0 71084 00:17:54.367 07:54:16 blockdev_xnvme -- common/autotest_common.sh@955 -- # uname 00:17:54.367 07:54:16 blockdev_xnvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:54.367 07:54:16 blockdev_xnvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71084 00:17:54.367 killing process with pid 71084 00:17:54.367 07:54:16 blockdev_xnvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:54.367 07:54:16 blockdev_xnvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:54.367 07:54:16 blockdev_xnvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71084' 00:17:54.367 07:54:16 blockdev_xnvme -- common/autotest_common.sh@969 -- # kill 71084 00:17:54.367 07:54:16 blockdev_xnvme -- common/autotest_common.sh@974 -- # wait 71084 00:17:56.915 07:54:19 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:56.915 07:54:19 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:17:56.915 07:54:19 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:17:56.915 07:54:19 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:56.915 07:54:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:56.915 ************************************ 00:17:56.915 START TEST bdev_hello_world 00:17:56.915 ************************************ 00:17:56.915 07:54:19 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:17:56.915 [2024-11-06 07:54:19.437939] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:17:56.915 [2024-11-06 07:54:19.438135] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71460 ] 00:17:57.178 [2024-11-06 07:54:19.627847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.178 [2024-11-06 07:54:19.780091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.746 [2024-11-06 07:54:20.277195] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:17:57.746 [2024-11-06 07:54:20.277306] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:17:57.746 [2024-11-06 07:54:20.277339] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:17:57.746 [2024-11-06 07:54:20.280116] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:17:57.746 [2024-11-06 07:54:20.280731] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:17:57.746 [2024-11-06 07:54:20.280785] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:17:57.746 [2024-11-06 07:54:20.280985] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
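At this point the harness has enumerated every /dev/nvme*n* namespace, registered each one as an xNVMe bdev over io_uring, and proven a write/read round-trip through hello_bdev. A condensed, standalone sketch of that sequence, assuming the repo layout from the log and a running SPDK target (the harness batches the create commands through its persistent rpc_cmd session and also honors an optional device filter, both elided here):

    io_mechanism=io_uring
    nvmes=()
    for nvme in /dev/nvme*n*; do
        [[ -b $nvme ]] || continue            # keep real block devices only (blockdev.sh@95)
        nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism")
    done
    for cmd in "${nvmes[@]}"; do
        ./scripts/rpc.py $cmd                 # unquoted on purpose: splits into method + args
    done
    # The harness snapshots the resulting config (save_subsystem_config) into
    # test/bdev/bdev.json, which lets the example app recreate the bdevs itself:
    ./build/examples/hello_bdev --json test/bdev/bdev.json -b nvme0n1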
00:17:57.746 00:17:57.746 [2024-11-06 07:54:20.281027] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:17:59.125 ************************************ 00:17:59.125 END TEST bdev_hello_world 00:17:59.125 ************************************ 00:17:59.125 00:17:59.125 real 0m2.118s 00:17:59.125 user 0m1.648s 00:17:59.125 sys 0m0.348s 00:17:59.125 07:54:21 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:59.125 07:54:21 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:17:59.125 07:54:21 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:17:59.125 07:54:21 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:59.125 07:54:21 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:59.125 07:54:21 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:59.125 ************************************ 00:17:59.125 START TEST bdev_bounds 00:17:59.125 ************************************ 00:17:59.125 07:54:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:17:59.125 07:54:21 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=71508 00:17:59.125 07:54:21 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:59.125 07:54:21 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:17:59.125 07:54:21 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 71508' 00:17:59.125 Process bdevio pid: 71508 00:17:59.125 07:54:21 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 71508 00:17:59.125 07:54:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 71508 ']' 00:17:59.125 07:54:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.125 07:54:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:59.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:59.125 07:54:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.125 07:54:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:59.125 07:54:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:59.125 [2024-11-06 07:54:21.601456] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
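bdev_bounds is starting bdevio here: the app loads the same test/bdev/bdev.json, then waits to be driven over /var/tmp/spdk.sock while tests.py runs the CUnit suites shown below. A minimal sketch of that launch/run/teardown cycle (flags -w and -s 0 copied from the invocation above; the harness's waitforlisten polling is elided):

    ./test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    bdevio_pid=$!
    # ...wait until /var/tmp/spdk.sock accepts RPCs...
    ./test/bdev/bdevio/tests.py perform_tests   # one suite per bdev, as in the report below
    kill "$bdevio_pid"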
00:17:59.125 [2024-11-06 07:54:21.601666] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71508 ] 00:17:59.383 [2024-11-06 07:54:21.783374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:59.383 [2024-11-06 07:54:21.941380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:59.383 [2024-11-06 07:54:21.941430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.383 [2024-11-06 07:54:21.941437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:00.335 07:54:22 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:00.335 07:54:22 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:18:00.335 07:54:22 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:18:00.335 I/O targets: 00:18:00.335 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:18:00.335 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:18:00.335 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:18:00.335 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:18:00.335 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:18:00.335 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:18:00.335 00:18:00.335 00:18:00.335 CUnit - A unit testing framework for C - Version 2.1-3 00:18:00.335 http://cunit.sourceforge.net/ 00:18:00.335 00:18:00.335 00:18:00.335 Suite: bdevio tests on: nvme3n1 00:18:00.335 Test: blockdev write read block ...passed 00:18:00.335 Test: blockdev write zeroes read block ...passed 00:18:00.335 Test: blockdev write zeroes read no split ...passed 00:18:00.335 Test: blockdev write zeroes read split ...passed 00:18:00.335 Test: blockdev write zeroes read split partial ...passed 00:18:00.335 Test: blockdev reset ...passed 00:18:00.335 Test: blockdev write read 8 blocks ...passed 00:18:00.335 Test: blockdev write read size > 128k ...passed 00:18:00.335 Test: blockdev write read invalid size ...passed 00:18:00.335 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:00.335 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:00.335 Test: blockdev write read max offset ...passed 00:18:00.335 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:00.335 Test: blockdev writev readv 8 blocks ...passed 00:18:00.335 Test: blockdev writev readv 30 x 1block ...passed 00:18:00.335 Test: blockdev writev readv block ...passed 00:18:00.335 Test: blockdev writev readv size > 128k ...passed 00:18:00.335 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:00.335 Test: blockdev comparev and writev ...passed 00:18:00.335 Test: blockdev nvme passthru rw ...passed 00:18:00.335 Test: blockdev nvme passthru vendor specific ...passed 00:18:00.335 Test: blockdev nvme admin passthru ...passed 00:18:00.335 Test: blockdev copy ...passed 00:18:00.335 Suite: bdevio tests on: nvme2n3 00:18:00.335 Test: blockdev write read block ...passed 00:18:00.335 Test: blockdev write zeroes read block ...passed 00:18:00.335 Test: blockdev write zeroes read no split ...passed 00:18:00.335 Test: blockdev write zeroes read split ...passed 00:18:00.335 Test: blockdev write zeroes read split partial ...passed 00:18:00.335 Test: blockdev reset ...passed 
00:18:00.335 Test: blockdev write read 8 blocks ...passed 00:18:00.335 Test: blockdev write read size > 128k ...passed 00:18:00.335 Test: blockdev write read invalid size ...passed 00:18:00.335 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:00.335 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:00.335 Test: blockdev write read max offset ...passed 00:18:00.335 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:00.335 Test: blockdev writev readv 8 blocks ...passed 00:18:00.335 Test: blockdev writev readv 30 x 1block ...passed 00:18:00.335 Test: blockdev writev readv block ...passed 00:18:00.335 Test: blockdev writev readv size > 128k ...passed 00:18:00.335 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:00.335 Test: blockdev comparev and writev ...passed 00:18:00.335 Test: blockdev nvme passthru rw ...passed 00:18:00.335 Test: blockdev nvme passthru vendor specific ...passed 00:18:00.335 Test: blockdev nvme admin passthru ...passed 00:18:00.335 Test: blockdev copy ...passed 00:18:00.335 Suite: bdevio tests on: nvme2n2 00:18:00.335 Test: blockdev write read block ...passed 00:18:00.335 Test: blockdev write zeroes read block ...passed 00:18:00.335 Test: blockdev write zeroes read no split ...passed 00:18:00.336 Test: blockdev write zeroes read split ...passed 00:18:00.336 Test: blockdev write zeroes read split partial ...passed 00:18:00.336 Test: blockdev reset ...passed 00:18:00.336 Test: blockdev write read 8 blocks ...passed 00:18:00.336 Test: blockdev write read size > 128k ...passed 00:18:00.336 Test: blockdev write read invalid size ...passed 00:18:00.336 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:00.336 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:00.336 Test: blockdev write read max offset ...passed 00:18:00.336 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:00.336 Test: blockdev writev readv 8 blocks ...passed 00:18:00.336 Test: blockdev writev readv 30 x 1block ...passed 00:18:00.336 Test: blockdev writev readv block ...passed 00:18:00.336 Test: blockdev writev readv size > 128k ...passed 00:18:00.336 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:00.336 Test: blockdev comparev and writev ...passed 00:18:00.336 Test: blockdev nvme passthru rw ...passed 00:18:00.336 Test: blockdev nvme passthru vendor specific ...passed 00:18:00.336 Test: blockdev nvme admin passthru ...passed 00:18:00.336 Test: blockdev copy ...passed 00:18:00.336 Suite: bdevio tests on: nvme2n1 00:18:00.336 Test: blockdev write read block ...passed 00:18:00.336 Test: blockdev write zeroes read block ...passed 00:18:00.336 Test: blockdev write zeroes read no split ...passed 00:18:00.594 Test: blockdev write zeroes read split ...passed 00:18:00.594 Test: blockdev write zeroes read split partial ...passed 00:18:00.594 Test: blockdev reset ...passed 00:18:00.594 Test: blockdev write read 8 blocks ...passed 00:18:00.594 Test: blockdev write read size > 128k ...passed 00:18:00.594 Test: blockdev write read invalid size ...passed 00:18:00.594 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:00.594 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:00.594 Test: blockdev write read max offset ...passed 00:18:00.594 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:00.594 Test: blockdev writev readv 8 blocks 
...passed 00:18:00.594 Test: blockdev writev readv 30 x 1block ...passed 00:18:00.594 Test: blockdev writev readv block ...passed 00:18:00.594 Test: blockdev writev readv size > 128k ...passed 00:18:00.594 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:00.594 Test: blockdev comparev and writev ...passed 00:18:00.594 Test: blockdev nvme passthru rw ...passed 00:18:00.594 Test: blockdev nvme passthru vendor specific ...passed 00:18:00.594 Test: blockdev nvme admin passthru ...passed 00:18:00.594 Test: blockdev copy ...passed 00:18:00.594 Suite: bdevio tests on: nvme1n1 00:18:00.594 Test: blockdev write read block ...passed 00:18:00.594 Test: blockdev write zeroes read block ...passed 00:18:00.594 Test: blockdev write zeroes read no split ...passed 00:18:00.594 Test: blockdev write zeroes read split ...passed 00:18:00.594 Test: blockdev write zeroes read split partial ...passed 00:18:00.594 Test: blockdev reset ...passed 00:18:00.594 Test: blockdev write read 8 blocks ...passed 00:18:00.594 Test: blockdev write read size > 128k ...passed 00:18:00.594 Test: blockdev write read invalid size ...passed 00:18:00.594 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:00.594 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:00.594 Test: blockdev write read max offset ...passed 00:18:00.594 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:00.594 Test: blockdev writev readv 8 blocks ...passed 00:18:00.594 Test: blockdev writev readv 30 x 1block ...passed 00:18:00.594 Test: blockdev writev readv block ...passed 00:18:00.594 Test: blockdev writev readv size > 128k ...passed 00:18:00.594 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:00.594 Test: blockdev comparev and writev ...passed 00:18:00.594 Test: blockdev nvme passthru rw ...passed 00:18:00.594 Test: blockdev nvme passthru vendor specific ...passed 00:18:00.594 Test: blockdev nvme admin passthru ...passed 00:18:00.594 Test: blockdev copy ...passed 00:18:00.594 Suite: bdevio tests on: nvme0n1 00:18:00.594 Test: blockdev write read block ...passed 00:18:00.594 Test: blockdev write zeroes read block ...passed 00:18:00.594 Test: blockdev write zeroes read no split ...passed 00:18:00.594 Test: blockdev write zeroes read split ...passed 00:18:00.594 Test: blockdev write zeroes read split partial ...passed 00:18:00.594 Test: blockdev reset ...passed 00:18:00.595 Test: blockdev write read 8 blocks ...passed 00:18:00.595 Test: blockdev write read size > 128k ...passed 00:18:00.595 Test: blockdev write read invalid size ...passed 00:18:00.595 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:00.595 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:00.595 Test: blockdev write read max offset ...passed 00:18:00.595 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:00.595 Test: blockdev writev readv 8 blocks ...passed 00:18:00.595 Test: blockdev writev readv 30 x 1block ...passed 00:18:00.595 Test: blockdev writev readv block ...passed 00:18:00.595 Test: blockdev writev readv size > 128k ...passed 00:18:00.595 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:00.595 Test: blockdev comparev and writev ...passed 00:18:00.595 Test: blockdev nvme passthru rw ...passed 00:18:00.595 Test: blockdev nvme passthru vendor specific ...passed 00:18:00.595 Test: blockdev nvme admin passthru ...passed 00:18:00.595 Test: blockdev copy ...passed 
00:18:00.595
00:18:00.595 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:18:00.595               suites      6      6    n/a      0        0
00:18:00.595                tests    138    138    138      0        0
00:18:00.595              asserts    780    780    780      0      n/a
00:18:00.595
00:18:00.595 Elapsed time =    1.295 seconds
00:18:00.595 0
00:18:00.595 07:54:23 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 71508
00:18:00.595 07:54:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 71508 ']'
00:18:00.595 07:54:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 71508
00:18:00.595 07:54:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname
00:18:00.595 07:54:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:18:00.595 07:54:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71508
00:18:00.853 07:54:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:18:00.853 killing process with pid 71508 07:54:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:18:00.853 07:54:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71508'
00:18:00.853 07:54:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 71508
00:18:00.853 07:54:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 71508
00:18:01.788 07:54:24 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:18:01.788
00:18:01.788 real	0m2.821s
00:18:01.788 user	0m6.967s
00:18:01.788 sys	0m0.510s
00:18:01.788 ************************************
00:18:01.788 END TEST bdev_bounds
00:18:01.788 ************************************
00:18:01.788 07:54:24 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable
00:18:01.788 07:54:24 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:18:01.788 07:54:24 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' ''
00:18:01.788 07:54:24 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:18:01.788 07:54:24 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:18:01.788 07:54:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:18:01.788 ************************************
00:18:01.788 START TEST bdev_nbd
00:18:01.788 ************************************
00:18:01.788 07:54:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' ''
00:18:01.788 07:54:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:18:01.788 07:54:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:18:01.788 07:54:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:18:01.788 07:54:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:18:01.788 07:54:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1')
00:18:01.788 07:54:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:18:01.788 07:54:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6
00:18:01.788 07:54:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:18:01.788 07:54:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:18:01.788 07:54:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:18:01.788 07:54:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:18:01.788 07:54:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:01.788 07:54:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:18:01.788 07:54:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:18:01.788 07:54:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:18:01.788 07:54:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=71566 00:18:01.788 07:54:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:18:01.788 07:54:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 71566 /var/tmp/spdk-nbd.sock 00:18:01.788 07:54:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:01.788 07:54:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 71566 ']' 00:18:01.788 07:54:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:01.788 07:54:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:01.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:18:01.788 07:54:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:18:01.788 07:54:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:01.788 07:54:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:02.046 [2024-11-06 07:54:24.509548] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
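bdev_nbd now brings up a bdev_svc app as a dedicated RPC server on /var/tmp/spdk-nbd.sock and, after checking that the kernel nbd module is present ([[ -e /sys/module/nbd ]] above), exports each bdev as a /dev/nbdX node and verifies it with a one-block direct-I/O read. The start/verify/stop cycle that the following lines repeat per device, condensed into a sketch (run as root from the repo root; the retry loops of waitfornbd/waitfornbd_exit are elided):

    rpc='./scripts/rpc.py -s /var/tmp/spdk-nbd.sock'  # unquoted $rpc below splits on purpose
    $rpc nbd_start_disk nvme0n1 /dev/nbd0     # attach the bdev to the kernel nbd driver
    grep -q -w nbd0 /proc/partitions          # readiness probe used by waitfornbd
    dd if=/dev/nbd0 of=test/bdev/nbdtest bs=4096 count=1 iflag=direct
    [[ $(stat -c %s test/bdev/nbdtest) -eq 4096 ]] && rm -f test/bdev/nbdtest
    $rpc nbd_get_disks                        # JSON list of nbd_device/bdev_name pairs
    $rpc nbd_stop_disk /dev/nbd0              # detach before moving to the next device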
00:18:02.046 [2024-11-06 07:54:24.509749] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:02.305 [2024-11-06 07:54:24.704063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.305 [2024-11-06 07:54:24.835224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.871 07:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:02.871 07:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:18:02.871 07:54:25 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:18:02.871 07:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:02.871 07:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:18:02.871 07:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:18:02.871 07:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:18:02.871 07:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:02.871 07:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:18:02.871 07:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:18:02.871 07:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:18:02.871 07:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:18:02.871 07:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:18:02.871 07:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:02.871 07:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:18:03.437 07:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:18:03.437 07:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:18:03.437 07:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:18:03.437 07:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:03.437 07:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:18:03.437 07:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:03.437 07:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:03.437 07:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:03.437 07:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:18:03.437 07:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:03.437 07:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:03.437 07:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:03.437 
1+0 records in 00:18:03.437 1+0 records out 00:18:03.437 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000629858 s, 6.5 MB/s 00:18:03.437 07:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:03.437 07:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:18:03.437 07:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:03.437 07:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:03.437 07:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:18:03.437 07:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:03.438 07:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:03.438 07:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:18:03.696 07:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:18:03.696 07:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:18:03.696 07:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:18:03.696 07:54:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:18:03.696 07:54:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:18:03.696 07:54:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:03.696 07:54:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:03.696 07:54:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:18:03.696 07:54:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:18:03.696 07:54:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:03.696 07:54:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:03.696 07:54:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:03.696 1+0 records in 00:18:03.696 1+0 records out 00:18:03.696 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00093036 s, 4.4 MB/s 00:18:03.696 07:54:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:03.696 07:54:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:18:03.696 07:54:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:03.696 07:54:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:03.696 07:54:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:18:03.696 07:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:03.696 07:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:03.696 07:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:18:03.955 07:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:18:03.955 07:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:18:03.955 07:54:26 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:18:03.955 07:54:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:18:03.955 07:54:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:18:03.955 07:54:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:03.955 07:54:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:03.955 07:54:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:18:03.955 07:54:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:18:03.955 07:54:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:03.955 07:54:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:03.955 07:54:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:03.955 1+0 records in 00:18:03.955 1+0 records out 00:18:03.955 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000657055 s, 6.2 MB/s 00:18:03.955 07:54:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:03.955 07:54:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:18:03.955 07:54:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:03.955 07:54:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:03.955 07:54:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:18:03.955 07:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:03.955 07:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:03.955 07:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:18:04.213 07:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:18:04.213 07:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:18:04.213 07:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:18:04.213 07:54:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:18:04.213 07:54:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:18:04.213 07:54:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:04.213 07:54:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:04.213 07:54:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:18:04.213 07:54:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:18:04.213 07:54:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:04.213 07:54:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:04.213 07:54:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:04.213 1+0 records in 00:18:04.213 1+0 records out 00:18:04.213 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000805989 s, 5.1 MB/s 00:18:04.213 07:54:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:04.213 07:54:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:18:04.213 07:54:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:04.213 07:54:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:04.213 07:54:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:18:04.213 07:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:04.213 07:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:04.213 07:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:18:04.780 07:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:18:04.780 07:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:18:04.780 07:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:18:04.780 07:54:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:18:04.780 07:54:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:18:04.780 07:54:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:04.780 07:54:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:04.780 07:54:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:18:04.780 07:54:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:18:04.780 07:54:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:04.780 07:54:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:04.780 07:54:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:04.780 1+0 records in 00:18:04.780 1+0 records out 00:18:04.780 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000927892 s, 4.4 MB/s 00:18:04.780 07:54:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:04.780 07:54:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:18:04.780 07:54:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:04.780 07:54:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:04.780 07:54:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:18:04.780 07:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:04.780 07:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:04.781 07:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:18:05.039 07:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:18:05.039 07:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:18:05.039 07:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:18:05.039 07:54:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:18:05.039 07:54:27 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:18:05.039 07:54:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:05.039 07:54:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:05.039 07:54:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:18:05.039 07:54:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:18:05.039 07:54:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:05.039 07:54:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:05.039 07:54:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:05.039 1+0 records in 00:18:05.039 1+0 records out 00:18:05.039 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000752717 s, 5.4 MB/s 00:18:05.039 07:54:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:05.039 07:54:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:18:05.039 07:54:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:05.039 07:54:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:05.039 07:54:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:18:05.039 07:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:05.039 07:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:05.039 07:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:05.299 07:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:18:05.299 { 00:18:05.299 "nbd_device": "/dev/nbd0", 00:18:05.299 "bdev_name": "nvme0n1" 00:18:05.299 }, 00:18:05.299 { 00:18:05.299 "nbd_device": "/dev/nbd1", 00:18:05.299 "bdev_name": "nvme1n1" 00:18:05.299 }, 00:18:05.299 { 00:18:05.299 "nbd_device": "/dev/nbd2", 00:18:05.299 "bdev_name": "nvme2n1" 00:18:05.299 }, 00:18:05.299 { 00:18:05.299 "nbd_device": "/dev/nbd3", 00:18:05.299 "bdev_name": "nvme2n2" 00:18:05.299 }, 00:18:05.299 { 00:18:05.299 "nbd_device": "/dev/nbd4", 00:18:05.299 "bdev_name": "nvme2n3" 00:18:05.299 }, 00:18:05.299 { 00:18:05.299 "nbd_device": "/dev/nbd5", 00:18:05.299 "bdev_name": "nvme3n1" 00:18:05.299 } 00:18:05.299 ]' 00:18:05.299 07:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:18:05.299 07:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:18:05.299 { 00:18:05.299 "nbd_device": "/dev/nbd0", 00:18:05.299 "bdev_name": "nvme0n1" 00:18:05.299 }, 00:18:05.299 { 00:18:05.299 "nbd_device": "/dev/nbd1", 00:18:05.299 "bdev_name": "nvme1n1" 00:18:05.299 }, 00:18:05.299 { 00:18:05.299 "nbd_device": "/dev/nbd2", 00:18:05.299 "bdev_name": "nvme2n1" 00:18:05.299 }, 00:18:05.299 { 00:18:05.299 "nbd_device": "/dev/nbd3", 00:18:05.299 "bdev_name": "nvme2n2" 00:18:05.299 }, 00:18:05.299 { 00:18:05.299 "nbd_device": "/dev/nbd4", 00:18:05.299 "bdev_name": "nvme2n3" 00:18:05.299 }, 00:18:05.299 { 00:18:05.299 "nbd_device": "/dev/nbd5", 00:18:05.299 "bdev_name": "nvme3n1" 00:18:05.299 } 00:18:05.299 ]' 00:18:05.299 07:54:27 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:18:05.299 07:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:18:05.299 07:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:05.299 07:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:18:05.299 07:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:05.299 07:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:05.299 07:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:05.299 07:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:05.558 07:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:05.558 07:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:05.558 07:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:05.558 07:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:05.558 07:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:05.558 07:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:05.558 07:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:05.558 07:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:05.558 07:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:05.558 07:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:18:05.817 07:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:05.817 07:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:05.817 07:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:05.817 07:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:05.817 07:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:05.817 07:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:06.075 07:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:06.075 07:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:06.075 07:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:06.075 07:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:18:06.333 07:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:18:06.333 07:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:18:06.333 07:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:18:06.333 07:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:06.333 07:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:06.333 07:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:18:06.333 07:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:06.333 07:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:06.333 07:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:06.333 07:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:18:06.592 07:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:18:06.592 07:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:18:06.592 07:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:18:06.592 07:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:06.592 07:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:06.592 07:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:18:06.592 07:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:06.592 07:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:06.592 07:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:06.592 07:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:18:06.851 07:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:18:06.851 07:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:18:06.851 07:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:18:06.851 07:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:06.851 07:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:06.851 07:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:18:06.851 07:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:06.851 07:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:06.851 07:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:06.851 07:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:18:07.111 07:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:18:07.111 07:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:18:07.111 07:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:18:07.111 07:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:07.111 07:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:07.111 07:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:18:07.111 07:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:07.111 07:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:07.111 07:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:07.111 07:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:07.111 07:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:07.371 07:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:07.371 07:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:07.371 07:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:07.371 07:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:07.371 07:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:07.371 07:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:07.629 07:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:07.629 07:54:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:07.629 07:54:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:07.629 07:54:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:18:07.629 07:54:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:18:07.629 07:54:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:18:07.629 07:54:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:18:07.629 07:54:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:07.629 07:54:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:18:07.629 07:54:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:18:07.629 07:54:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:07.630 07:54:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:18:07.630 07:54:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:18:07.630 07:54:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:07.630 07:54:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:18:07.630 07:54:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:07.630 07:54:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:07.630 07:54:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:07.630 07:54:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:18:07.630 07:54:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:07.630 07:54:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:07.630 07:54:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:18:07.888 /dev/nbd0 00:18:07.889 07:54:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:07.889 07:54:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:07.889 07:54:30 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:07.889 07:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:18:07.889 07:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:07.889 07:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:07.889 07:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:07.889 07:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:18:07.889 07:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:07.889 07:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:07.889 07:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:07.889 1+0 records in 00:18:07.889 1+0 records out 00:18:07.889 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000633674 s, 6.5 MB/s 00:18:07.889 07:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:07.889 07:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:18:07.889 07:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:07.889 07:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:07.889 07:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:18:07.889 07:54:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:07.889 07:54:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:07.889 07:54:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:18:08.148 /dev/nbd1 00:18:08.148 07:54:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:08.148 07:54:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:08.148 07:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:18:08.148 07:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:18:08.148 07:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:08.148 07:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:08.148 07:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:18:08.148 07:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:18:08.148 07:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:08.148 07:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:08.148 07:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:08.148 1+0 records in 00:18:08.148 1+0 records out 00:18:08.148 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000917982 s, 4.5 MB/s 00:18:08.148 07:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:08.148 07:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:18:08.148 07:54:30 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:08.148 07:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:08.148 07:54:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:18:08.148 07:54:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:08.148 07:54:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:08.148 07:54:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:18:08.407 /dev/nbd10 00:18:08.407 07:54:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:18:08.407 07:54:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:18:08.407 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:18:08.407 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:18:08.407 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:08.407 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:08.407 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:18:08.407 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:18:08.407 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:08.407 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:08.407 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:08.407 1+0 records in 00:18:08.407 1+0 records out 00:18:08.407 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000872245 s, 4.7 MB/s 00:18:08.407 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:08.407 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:18:08.407 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:08.407 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:08.407 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:18:08.407 07:54:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:08.407 07:54:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:08.407 07:54:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:18:08.666 /dev/nbd11 00:18:08.924 07:54:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:18:08.924 07:54:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:18:08.924 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:18:08.924 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:18:08.924 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:08.924 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:08.924 07:54:31 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:18:08.924 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:18:08.924 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:08.924 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:08.924 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:08.924 1+0 records in 00:18:08.924 1+0 records out 00:18:08.924 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000715113 s, 5.7 MB/s 00:18:08.924 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:08.924 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:18:08.924 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:08.924 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:08.924 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:18:08.924 07:54:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:08.924 07:54:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:08.924 07:54:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:18:09.182 /dev/nbd12 00:18:09.182 07:54:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:18:09.182 07:54:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:18:09.182 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:18:09.182 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:18:09.182 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:09.182 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:09.182 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:18:09.182 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:18:09.182 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:09.182 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:09.182 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:09.182 1+0 records in 00:18:09.182 1+0 records out 00:18:09.182 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000838002 s, 4.9 MB/s 00:18:09.182 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:09.182 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:18:09.182 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:09.182 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:09.182 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:18:09.182 07:54:31 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:09.182 07:54:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:09.182 07:54:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:18:09.441 /dev/nbd13 00:18:09.441 07:54:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:18:09.441 07:54:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:18:09.441 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:18:09.441 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:18:09.441 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:09.441 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:09.441 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:18:09.441 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:18:09.441 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:09.441 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:09.441 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:09.441 1+0 records in 00:18:09.441 1+0 records out 00:18:09.441 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000736058 s, 5.6 MB/s 00:18:09.441 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:09.441 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:18:09.441 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:09.441 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:09.441 07:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:18:09.441 07:54:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:09.441 07:54:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:09.441 07:54:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:09.441 07:54:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:09.441 07:54:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:09.700 07:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:18:09.700 { 00:18:09.700 "nbd_device": "/dev/nbd0", 00:18:09.700 "bdev_name": "nvme0n1" 00:18:09.700 }, 00:18:09.700 { 00:18:09.700 "nbd_device": "/dev/nbd1", 00:18:09.700 "bdev_name": "nvme1n1" 00:18:09.700 }, 00:18:09.700 { 00:18:09.700 "nbd_device": "/dev/nbd10", 00:18:09.700 "bdev_name": "nvme2n1" 00:18:09.700 }, 00:18:09.700 { 00:18:09.700 "nbd_device": "/dev/nbd11", 00:18:09.700 "bdev_name": "nvme2n2" 00:18:09.700 }, 00:18:09.700 { 00:18:09.700 "nbd_device": "/dev/nbd12", 00:18:09.700 "bdev_name": "nvme2n3" 00:18:09.700 }, 00:18:09.700 { 00:18:09.700 "nbd_device": "/dev/nbd13", 00:18:09.700 "bdev_name": "nvme3n1" 00:18:09.700 } 00:18:09.700 ]' 00:18:09.700 07:54:32 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:18:09.700 { 00:18:09.700 "nbd_device": "/dev/nbd0", 00:18:09.700 "bdev_name": "nvme0n1" 00:18:09.700 }, 00:18:09.700 { 00:18:09.700 "nbd_device": "/dev/nbd1", 00:18:09.700 "bdev_name": "nvme1n1" 00:18:09.700 }, 00:18:09.700 { 00:18:09.700 "nbd_device": "/dev/nbd10", 00:18:09.700 "bdev_name": "nvme2n1" 00:18:09.700 }, 00:18:09.700 { 00:18:09.700 "nbd_device": "/dev/nbd11", 00:18:09.700 "bdev_name": "nvme2n2" 00:18:09.700 }, 00:18:09.700 { 00:18:09.700 "nbd_device": "/dev/nbd12", 00:18:09.700 "bdev_name": "nvme2n3" 00:18:09.700 }, 00:18:09.700 { 00:18:09.700 "nbd_device": "/dev/nbd13", 00:18:09.700 "bdev_name": "nvme3n1" 00:18:09.700 } 00:18:09.700 ]' 00:18:09.700 07:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:09.971 07:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:18:09.971 /dev/nbd1 00:18:09.971 /dev/nbd10 00:18:09.971 /dev/nbd11 00:18:09.971 /dev/nbd12 00:18:09.971 /dev/nbd13' 00:18:09.971 07:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:18:09.971 /dev/nbd1 00:18:09.971 /dev/nbd10 00:18:09.971 /dev/nbd11 00:18:09.971 /dev/nbd12 00:18:09.971 /dev/nbd13' 00:18:09.971 07:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:09.971 07:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:18:09.971 07:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:18:09.971 07:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:18:09.971 07:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:18:09.971 07:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:18:09.971 07:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:09.971 07:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:09.971 07:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:18:09.971 07:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:09.971 07:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:18:09.971 07:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:18:09.971 256+0 records in 00:18:09.971 256+0 records out 00:18:09.971 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107837 s, 97.2 MB/s 00:18:09.971 07:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:09.971 07:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:18:09.971 256+0 records in 00:18:09.971 256+0 records out 00:18:09.971 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.160152 s, 6.5 MB/s 00:18:09.971 07:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:09.971 07:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:18:10.252 256+0 records in 00:18:10.252 256+0 records out 00:18:10.252 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.203995 s, 5.1 MB/s 00:18:10.252 07:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:10.252 07:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:18:10.510 256+0 records in 00:18:10.510 256+0 records out 00:18:10.510 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.163378 s, 6.4 MB/s 00:18:10.510 07:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:10.510 07:54:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:18:10.510 256+0 records in 00:18:10.510 256+0 records out 00:18:10.510 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.132962 s, 7.9 MB/s 00:18:10.510 07:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:10.510 07:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:18:10.770 256+0 records in 00:18:10.770 256+0 records out 00:18:10.770 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.164716 s, 6.4 MB/s 00:18:10.770 07:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:10.770 07:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:18:10.770 256+0 records in 00:18:10.770 256+0 records out 00:18:10.770 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.137225 s, 7.6 MB/s 00:18:10.770 07:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:18:10.770 07:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:10.770 07:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:10.770 07:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:18:10.770 07:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:10.770 07:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:18:10.770 07:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:18:10.770 07:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:10.770 07:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:18:10.770 07:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:10.770 07:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:18:10.770 07:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:10.770 07:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:18:10.770 07:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:10.770 07:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:18:10.770 07:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:10.770 07:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:18:10.770 07:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:10.770 07:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:18:11.029 07:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:11.029 07:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:18:11.029 07:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:11.029 07:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:11.029 07:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:11.029 07:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:11.029 07:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:11.029 07:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:11.287 07:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:11.287 07:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:11.287 07:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:11.287 07:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:11.287 07:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:11.287 07:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:11.287 07:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:11.287 07:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:11.287 07:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:11.287 07:54:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:18:11.545 07:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:11.545 07:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:11.545 07:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:11.545 07:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:11.545 07:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:11.545 07:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:11.545 07:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:11.545 07:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:11.545 07:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:11.545 07:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:18:11.804 07:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:18:11.804 07:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:18:11.804 07:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:18:11.804 07:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:11.804 07:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:11.804 07:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:18:11.804 07:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:11.804 07:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:11.804 07:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:11.804 07:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:18:12.062 07:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:18:12.062 07:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:18:12.062 07:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:18:12.062 07:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:12.062 07:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:12.062 07:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:18:12.062 07:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:12.062 07:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:12.062 07:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:12.062 07:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:18:12.321 07:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:18:12.321 07:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:18:12.321 07:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:18:12.321 07:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:12.321 07:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:12.321 07:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:18:12.321 07:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:12.321 07:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:12.321 07:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:12.321 07:54:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:18:12.889 07:54:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:18:12.889 07:54:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:18:12.889 07:54:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:18:12.889 07:54:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:12.889 07:54:35 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:12.889 07:54:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:18:12.889 07:54:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:12.889 07:54:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:12.889 07:54:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:12.889 07:54:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:12.889 07:54:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:13.148 07:54:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:13.148 07:54:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:13.148 07:54:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:13.148 07:54:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:13.148 07:54:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:13.148 07:54:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:13.148 07:54:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:13.148 07:54:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:13.148 07:54:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:13.148 07:54:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:18:13.148 07:54:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:18:13.148 07:54:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:18:13.148 07:54:35 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:13.148 07:54:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:13.148 07:54:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:18:13.148 07:54:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:18:13.406 malloc_lvol_verify 00:18:13.406 07:54:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:18:13.664 f6d9ca30-fd15-40ad-a8bb-4cdaee1a9e04 00:18:13.664 07:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:18:14.230 17621f8d-508a-496e-81a1-48463512252a 00:18:14.230 07:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:18:14.230 /dev/nbd0 00:18:14.489 07:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:18:14.489 07:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:18:14.489 07:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:18:14.489 07:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:18:14.489 07:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
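Condensed from the trace above, the nbd_with_lvol_verify step is just this RPC sequence (a minimal sketch; the socket path and the 16 MiB / 512 B / 4 MiB sizes are the ones this run uses):

  # malloc bdev -> lvstore -> lvol, then export the lvol over NBD
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
  # the mkfs below is the actual check: the exported lvol must accept a filesystem
  mkfs.ext4 /dev/nbd0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0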
00:18:14.489 mke2fs 1.47.0 (5-Feb-2023) 00:18:14.489 Discarding device blocks: 0/4096 done 00:18:14.489 Creating filesystem with 4096 1k blocks and 1024 inodes 00:18:14.489 00:18:14.489 Allocating group tables: 0/1 done 00:18:14.489 Writing inode tables: 0/1 done 00:18:14.489 Creating journal (1024 blocks): done 00:18:14.489 Writing superblocks and filesystem accounting information: 0/1 done 00:18:14.489 00:18:14.489 07:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:14.489 07:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:14.489 07:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:14.489 07:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:14.489 07:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:14.489 07:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:14.489 07:54:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:14.747 07:54:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:14.747 07:54:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:14.747 07:54:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:14.747 07:54:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:14.747 07:54:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:14.747 07:54:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:14.747 07:54:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:14.747 07:54:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:14.747 07:54:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 71566 00:18:14.747 07:54:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 71566 ']' 00:18:14.747 07:54:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 71566 00:18:14.747 07:54:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:18:14.747 07:54:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:14.747 07:54:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71566 00:18:14.747 killing process with pid 71566 00:18:14.747 07:54:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:14.747 07:54:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:14.747 07:54:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71566' 00:18:14.747 07:54:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 71566 00:18:14.747 07:54:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 71566 00:18:16.124 ************************************ 00:18:16.124 END TEST bdev_nbd 00:18:16.124 ************************************ 00:18:16.124 07:54:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:18:16.124 00:18:16.124 real 0m14.148s 00:18:16.124 user 0m20.097s 00:18:16.124 sys 0m4.638s 00:18:16.124 07:54:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:16.124 
07:54:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:16.124 07:54:38 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:18:16.124 07:54:38 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:18:16.124 07:54:38 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:18:16.124 07:54:38 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:18:16.124 07:54:38 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:16.124 07:54:38 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:16.124 07:54:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:16.124 ************************************ 00:18:16.124 START TEST bdev_fio 00:18:16.124 ************************************ 00:18:16.124 07:54:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:18:16.124 07:54:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:18:16.124 07:54:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:18:16.124 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:18:16.124 07:54:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:18:16.124 07:54:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:18:16.124 07:54:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:18:16.124 07:54:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:18:16.124 07:54:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:18:16.124 07:54:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:16.124 07:54:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:18:16.124 07:54:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:18:16.124 07:54:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:18:16.124 07:54:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:18:16.124 07:54:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:18:16.124 07:54:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:18:16.124 07:54:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:18:16.124 07:54:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:16.124 07:54:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:18:16.124 07:54:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:18:16.124 07:54:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:18:16.124 07:54:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:18:16.124 07:54:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:18:16.124 07:54:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:18:16.125 07:54:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo 
serialize_overlap=1 00:18:16.125 07:54:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:16.125 07:54:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:18:16.125 07:54:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:18:16.125 07:54:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:16.125 07:54:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:18:16.125 07:54:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:18:16.125 07:54:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:16.125 07:54:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:18:16.125 07:54:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:18:16.125 07:54:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:16.125 07:54:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:18:16.125 07:54:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:18:16.125 07:54:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:16.125 07:54:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:18:16.125 07:54:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:18:16.125 07:54:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:16.125 07:54:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:18:16.125 07:54:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:18:16.125 07:54:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:18:16.125 07:54:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:16.125 07:54:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:18:16.125 07:54:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:16.125 07:54:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:18:16.125 ************************************ 00:18:16.125 START TEST bdev_fio_rw_verify 00:18:16.125 ************************************ 00:18:16.125 07:54:38 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:16.125 07:54:38 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:16.125 07:54:38 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:16.125 07:54:38 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:16.125 07:54:38 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:16.125 07:54:38 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:16.125 07:54:38 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:18:16.125 07:54:38 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:16.125 07:54:38 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:16.125 07:54:38 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:16.125 07:54:38 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:18:16.125 07:54:38 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:16.125 07:54:38 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:16.125 07:54:38 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:16.125 07:54:38 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:18:16.125 07:54:38 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:16.125 07:54:38 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:16.383 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:16.383 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:16.383 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:16.383 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:16.383 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:16.383 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:16.383 fio-3.35 00:18:16.383 Starting 6 threads 00:18:28.624 00:18:28.624 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=72011: Wed Nov 6 07:54:49 2024 00:18:28.624 read: IOPS=28.5k, BW=111MiB/s (117MB/s)(1113MiB/10001msec) 00:18:28.624 slat (usec): min=3, max=784, avg= 8.29, stdev= 6.24 00:18:28.624 clat (usec): min=103, max=4670, avg=632.99, 
stdev=258.82
00:18:28.624 lat (usec): min=111, max=4677, avg=641.28, stdev=259.76
00:18:28.624 clat percentiles (usec):
00:18:28.624 | 50.000th=[ 627], 99.000th=[ 1270], 99.900th=[ 1827], 99.990th=[ 3785],
00:18:28.624 | 99.999th=[ 4686]
00:18:28.624 write: IOPS=28.9k, BW=113MiB/s (118MB/s)(1128MiB/10001msec); 0 zone resets
00:18:28.624 slat (usec): min=8, max=2770, avg=29.31, stdev=37.03
00:18:28.624 clat (usec): min=96, max=9460, avg=750.52, stdev=274.23
00:18:28.624 lat (usec): min=116, max=9489, avg=779.83, stdev=277.63
00:18:28.624 clat percentiles (usec):
00:18:28.624 | 50.000th=[ 742], 99.000th=[ 1500], 99.900th=[ 2024], 99.990th=[ 3523],
00:18:28.624 | 99.999th=[ 9372]
00:18:28.624 bw ( KiB/s): min=97023, max=140568, per=99.68%, avg=115122.47, stdev=2350.98, samples=114
00:18:28.624 iops : min=24255, max=35142, avg=28780.37, stdev=587.73, samples=114
00:18:28.624 lat (usec) : 100=0.01%, 250=3.27%, 500=22.07%, 750=33.77%, 1000=29.87%
00:18:28.624 lat (msec) : 2=10.91%, 4=0.09%, 10=0.01%
00:18:28.624 cpu : usr=56.32%, sys=27.36%, ctx=8056, majf=0, minf=24380
00:18:28.624 IO depths : 1=11.7%, 2=24.2%, 4=50.8%, 8=13.3%, 16=0.0%, 32=0.0%, >=64=0.0%
00:18:28.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:28.624 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:28.624 issued rwts: total=284953,288753,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:28.624 latency : target=0, window=0, percentile=100.00%, depth=8
00:18:28.624
00:18:28.624 Run status group 0 (all jobs):
00:18:28.624 READ: bw=111MiB/s (117MB/s), 111MiB/s-111MiB/s (117MB/s-117MB/s), io=1113MiB (1167MB), run=10001-10001msec
00:18:28.624 WRITE: bw=113MiB/s (118MB/s), 113MiB/s-113MiB/s (118MB/s-118MB/s), io=1128MiB (1183MB), run=10001-10001msec
00:18:28.882 -----------------------------------------------------
00:18:28.882 Suppressions used:
00:18:28.882 count bytes template
00:18:28.882 6 48 /usr/src/fio/parse.c
00:18:28.882 3597 345312 /usr/src/fio/iolog.c
00:18:28.882 1 8 libtcmalloc_minimal.so
00:18:28.882 1 904 libcrypto.so
00:18:28.882 -----------------------------------------------------
00:18:28.882
00:18:28.882
00:18:28.882 real 0m12.715s
00:18:28.882 user 0m35.918s
00:18:28.882 sys 0m16.877s
07:54:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable
00:18:28.882 07:54:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x
00:18:28.882 ************************************
00:18:28.882 END TEST bdev_fio_rw_verify
00:18:28.882 ************************************
00:18:28.882 07:54:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f
00:18:28.882 07:54:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:18:28.883 07:54:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' ''
00:18:28.883 07:54:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:18:28.883 07:54:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim
00:18:28.883 07:54:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=
00:18:28.883 07:54:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context=
00:18:28.883 07:54:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local
fio_dir=/usr/src/fio 00:18:28.883 07:54:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:18:28.883 07:54:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:18:28.883 07:54:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:18:28.883 07:54:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:28.883 07:54:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:18:28.883 07:54:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:18:28.883 07:54:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:18:28.883 07:54:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:18:28.883 07:54:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:18:28.883 07:54:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "c67b8938-1241-4699-b3de-ce7b8a17b173"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "c67b8938-1241-4699-b3de-ce7b8a17b173",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "424a69ff-c6fd-4ab9-b5c4-01021d6a762d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "424a69ff-c6fd-4ab9-b5c4-01021d6a762d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "b4dfc2fc-644d-4d41-a887-f8a5e70eeb46"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b4dfc2fc-644d-4d41-a887-f8a5e70eeb46",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "8864fcd9-44f7-45b3-abbe-aa03d9622a57"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8864fcd9-44f7-45b3-abbe-aa03d9622a57",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "4e0fe1fb-aa52-4583-9181-35f698b783c6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4e0fe1fb-aa52-4583-9181-35f698b783c6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "e722aef1-783b-4511-8b3d-6f809139e5bb"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "e722aef1-783b-4511-8b3d-6f809139e5bb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:18:28.883 07:54:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:18:28.883 07:54:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:29.141 /home/vagrant/spdk_repo/spdk 00:18:29.141 07:54:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:18:29.141 07:54:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:18:29.141 07:54:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
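The jq filter traced at blockdev.sh@354 is what decides whether a trim pass runs: only bdevs reporting "unmap": true qualify, and every xNVMe bdev above reports "unmap": false, so the selection is empty ([[ -n '' ]] fails) and the trim job is skipped. The same query can be issued by hand (a sketch; socket path assumed, with a leading .[] added because bdev_get_bdevs returns a JSON array rather than the object stream the script pipes through jq):

  # names of all bdevs that can service unmap/trim
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs \
      | jq -r '.[] | select(.supported_io_types.unmap == true) | .name'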
00:18:29.141 00:18:29.141 real 0m12.928s 00:18:29.141 user 0m36.022s 00:18:29.141 sys 0m16.977s 00:18:29.141 07:54:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:29.141 07:54:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:18:29.141 ************************************ 00:18:29.141 END TEST bdev_fio 00:18:29.141 ************************************ 00:18:29.141 07:54:51 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:29.141 07:54:51 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:29.141 07:54:51 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:18:29.141 07:54:51 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:29.141 07:54:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:29.141 ************************************ 00:18:29.141 START TEST bdev_verify 00:18:29.141 ************************************ 00:18:29.141 07:54:51 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:29.141 [2024-11-06 07:54:51.687122] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:18:29.141 [2024-11-06 07:54:51.687404] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72186 ] 00:18:29.399 [2024-11-06 07:54:51.886959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:29.658 [2024-11-06 07:54:52.069422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.658 [2024-11-06 07:54:52.069437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:30.224 Running I/O for 5 seconds... 
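For reference, the bdev_verify pass starting here is a single bdevperf invocation, lifted from the run_test trace above: 128 outstanding I/Os per job, 4 KiB I/O size, a 5-second verify workload on core mask 0x3, and -C so that each core in the mask drives every bdev (which is why each device appears twice in the table below, once per core mask):

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3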
00:18:32.538 20448.00 IOPS, 79.88 MiB/s
[2024-11-06T07:54:56.103Z] 21200.00 IOPS, 82.81 MiB/s
[2024-11-06T07:54:57.040Z] 21642.67 IOPS, 84.54 MiB/s
[2024-11-06T07:54:57.983Z] 21840.00 IOPS, 85.31 MiB/s
[2024-11-06T07:54:57.983Z] 21683.20 IOPS, 84.70 MiB/s
00:18:35.354 Latency(us)
00:18:35.354 [2024-11-06T07:54:57.983Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:35.354 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:35.354 Verification LBA range: start 0x0 length 0xa0000
00:18:35.354 nvme0n1 : 5.01 1582.82 6.18 0.00 0.00 80715.60 15192.44 73876.95
00:18:35.354 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:18:35.354 Verification LBA range: start 0xa0000 length 0xa0000
00:18:35.354 nvme0n1 : 5.07 1516.12 5.92 0.00 0.00 84273.63 13226.36 79596.45
00:18:35.354 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:35.354 Verification LBA range: start 0x0 length 0xbd0bd
00:18:35.354 nvme1n1 : 5.06 2977.83 11.63 0.00 0.00 42787.45 5481.19 62914.56
00:18:35.354 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:18:35.354 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:18:35.354 nvme1n1 : 5.07 2838.40 11.09 0.00 0.00 44827.42 5183.30 67680.81
00:18:35.354 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:35.354 Verification LBA range: start 0x0 length 0x80000
00:18:35.354 nvme2n1 : 5.07 1589.73 6.21 0.00 0.00 80033.52 7506.85 67204.19
00:18:35.354 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:18:35.354 Verification LBA range: start 0x80000 length 0x80000
00:18:35.354 nvme2n1 : 5.08 1537.44 6.01 0.00 0.00 82580.17 8877.15 73876.95
00:18:35.354 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:35.354 Verification LBA range: start 0x0 length 0x80000
00:18:35.354 nvme2n2 : 5.07 1589.14 6.21 0.00 0.00 79933.65 8340.95 61961.31
00:18:35.354 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:18:35.354 Verification LBA range: start 0x80000 length 0x80000
00:18:35.354 nvme2n2 : 5.08 1536.85 6.00 0.00 0.00 82431.19 9830.40 68634.07
00:18:35.354 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:35.354 Verification LBA range: start 0x0 length 0x80000
00:18:35.354 nvme2n3 : 5.08 1588.58 6.21 0.00 0.00 79820.39 9234.62 69587.32
00:18:35.354 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:18:35.354 Verification LBA range: start 0x80000 length 0x80000
00:18:35.354 nvme2n3 : 5.09 1535.37 6.00 0.00 0.00 82368.52 9413.35 62437.93
00:18:35.354 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:35.354 Verification LBA range: start 0x0 length 0x20000
00:18:35.354 nvme3n1 : 5.08 1588.06 6.20 0.00 0.00 79702.10 6374.87 73400.32
00:18:35.354 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:18:35.354 Verification LBA range: start 0x20000 length 0x20000
00:18:35.354 nvme3n1 : 5.09 1534.87 6.00 0.00 0.00 82324.11 10128.29 69587.32
00:18:35.354 [2024-11-06T07:54:57.983Z] ===================================================================================================================
00:18:35.354 [2024-11-06T07:54:57.983Z] Total : 21415.21 83.65 0.00 0.00 71184.06 5183.30 79596.45
00:18:36.731
00:18:36.731 real 0m7.365s
00:18:36.731 user 0m11.510s
00:18:36.731 sys 0m1.915s
07:54:58 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:18:36.731 07:54:58 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:18:36.731 ************************************ 00:18:36.731 END TEST bdev_verify 00:18:36.731 ************************************ 00:18:36.731 07:54:58 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:18:36.731 07:54:58 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:18:36.731 07:54:58 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:36.731 07:54:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:36.731 ************************************ 00:18:36.731 START TEST bdev_verify_big_io 00:18:36.731 ************************************ 00:18:36.731 07:54:58 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:18:36.731 [2024-11-06 07:54:59.097268] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:18:36.731 [2024-11-06 07:54:59.097508] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72288 ] 00:18:36.731 [2024-11-06 07:54:59.275807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:36.990 [2024-11-06 07:54:59.421740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.990 [2024-11-06 07:54:59.421745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:37.564 Running I/O for 5 seconds... 
00:18:43.657 2304.00 IOPS, 144.00 MiB/s [2024-11-06T07:55:06.286Z] 4167.50 IOPS, 260.47 MiB/s 00:18:43.657 Latency(us) 00:18:43.657 [2024-11-06T07:55:06.286Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:43.657 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:43.657 Verification LBA range: start 0x0 length 0xa000 00:18:43.657 nvme0n1 : 5.88 130.63 8.16 0.00 0.00 961406.60 116296.61 892242.85 00:18:43.657 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:43.657 Verification LBA range: start 0xa000 length 0xa000 00:18:43.657 nvme0n1 : 5.95 126.45 7.90 0.00 0.00 987698.19 19541.64 1586209.51 00:18:43.657 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:43.657 Verification LBA range: start 0x0 length 0xbd0b 00:18:43.657 nvme1n1 : 5.88 160.65 10.04 0.00 0.00 754631.51 42896.29 876990.84 00:18:43.657 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:43.657 Verification LBA range: start 0xbd0b length 0xbd0b 00:18:43.657 nvme1n1 : 5.96 116.79 7.30 0.00 0.00 1021675.42 166818.91 2013265.92 00:18:43.657 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:43.657 Verification LBA range: start 0x0 length 0x8000 00:18:43.657 nvme2n1 : 5.84 145.11 9.07 0.00 0.00 817478.42 116773.24 949437.91 00:18:43.657 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:43.657 Verification LBA range: start 0x8000 length 0x8000 00:18:43.657 nvme2n1 : 5.91 105.59 6.60 0.00 0.00 1095771.09 187790.43 1700599.62 00:18:43.657 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:43.657 Verification LBA range: start 0x0 length 0x8000 00:18:43.657 nvme2n2 : 5.86 100.95 6.31 0.00 0.00 1147543.01 121062.87 2760614.63 00:18:43.657 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:43.657 Verification LBA range: start 0x8000 length 0x8000 00:18:43.657 nvme2n2 : 5.95 122.35 7.65 0.00 0.00 927772.40 82932.83 1738729.66 00:18:43.657 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:43.657 Verification LBA range: start 0x0 length 0x8000 00:18:43.657 nvme2n3 : 5.87 115.75 7.23 0.00 0.00 971738.02 91988.71 999006.95 00:18:43.657 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:43.657 Verification LBA range: start 0x8000 length 0x8000 00:18:43.657 nvme2n3 : 5.92 140.64 8.79 0.00 0.00 781290.80 42419.67 983754.94 00:18:43.657 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:43.657 Verification LBA range: start 0x0 length 0x2000 00:18:43.657 nvme3n1 : 5.87 170.42 10.65 0.00 0.00 648420.15 12749.73 766413.73 00:18:43.657 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:43.657 Verification LBA range: start 0x2000 length 0x2000 00:18:43.657 nvme3n1 : 5.97 155.49 9.72 0.00 0.00 687989.29 9413.35 1853119.77 00:18:43.658 [2024-11-06T07:55:06.287Z] =================================================================================================================== 00:18:43.658 [2024-11-06T07:55:06.287Z] Total : 1590.82 99.43 0.00 0.00 876704.02 9413.35 2760614.63 00:18:45.033 00:18:45.033 real 0m8.501s 00:18:45.033 user 0m15.263s 00:18:45.033 sys 0m0.703s 00:18:45.033 07:55:07 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:45.033 07:55:07 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # 
set +x 00:18:45.033 ************************************ 00:18:45.033 END TEST bdev_verify_big_io 00:18:45.033 ************************************ 00:18:45.034 07:55:07 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:45.034 07:55:07 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:18:45.034 07:55:07 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:45.034 07:55:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:45.034 ************************************ 00:18:45.034 START TEST bdev_write_zeroes 00:18:45.034 ************************************ 00:18:45.034 07:55:07 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:45.034 [2024-11-06 07:55:07.658998] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:18:45.034 [2024-11-06 07:55:07.659304] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72405 ] 00:18:45.293 [2024-11-06 07:55:07.841495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.551 [2024-11-06 07:55:07.997420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.118 Running I/O for 1 seconds... 00:18:47.108 62720.00 IOPS, 245.00 MiB/s 00:18:47.108 Latency(us) 00:18:47.108 [2024-11-06T07:55:09.737Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:47.108 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:47.108 nvme0n1 : 1.02 9173.38 35.83 0.00 0.00 13938.47 6732.33 25022.84 00:18:47.108 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:47.108 nvme1n1 : 1.03 16430.03 64.18 0.00 0.00 7743.27 4259.84 18230.92 00:18:47.108 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:47.108 nvme2n1 : 1.02 9129.41 35.66 0.00 0.00 13926.81 5540.77 25261.15 00:18:47.108 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:47.108 nvme2n2 : 1.02 9117.71 35.62 0.00 0.00 13932.48 5391.83 25499.46 00:18:47.108 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:47.108 nvme2n3 : 1.03 9106.23 35.57 0.00 0.00 13938.03 5510.98 25856.93 00:18:47.108 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:47.108 nvme3n1 : 1.03 9095.26 35.53 0.00 0.00 13938.70 5540.77 26214.40 00:18:47.108 [2024-11-06T07:55:09.737Z] =================================================================================================================== 00:18:47.108 [2024-11-06T07:55:09.737Z] Total : 62052.01 242.39 0.00 0.00 12290.44 4259.84 26214.40 00:18:48.486 00:18:48.486 real 0m3.133s 00:18:48.486 user 0m2.301s 00:18:48.486 sys 0m0.657s 00:18:48.486 07:55:10 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:48.486 07:55:10 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:18:48.486 ************************************ 00:18:48.486 END TEST 
bdev_write_zeroes 00:18:48.486 ************************************ 00:18:48.486 07:55:10 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:48.486 07:55:10 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:18:48.486 07:55:10 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:48.486 07:55:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:48.486 ************************************ 00:18:48.486 START TEST bdev_json_nonenclosed 00:18:48.486 ************************************ 00:18:48.486 07:55:10 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:48.486 [2024-11-06 07:55:10.865591] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:18:48.486 [2024-11-06 07:55:10.865843] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72465 ] 00:18:48.486 [2024-11-06 07:55:11.049173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.745 [2024-11-06 07:55:11.184413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:48.745 [2024-11-06 07:55:11.184552] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:18:48.745 [2024-11-06 07:55:11.184583] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:18:48.745 [2024-11-06 07:55:11.184598] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:49.004 00:18:49.004 real 0m0.702s 00:18:49.004 user 0m0.434s 00:18:49.004 sys 0m0.162s 00:18:49.004 07:55:11 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:49.004 07:55:11 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:18:49.004 ************************************ 00:18:49.004 END TEST bdev_json_nonenclosed 00:18:49.004 ************************************ 00:18:49.004 07:55:11 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:49.004 07:55:11 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:18:49.004 07:55:11 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:49.004 07:55:11 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:49.004 ************************************ 00:18:49.004 START TEST bdev_json_nonarray 00:18:49.004 ************************************ 00:18:49.004 07:55:11 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:49.004 [2024-11-06 07:55:11.629619] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
00:18:49.004 [2024-11-06 07:55:11.629862] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72490 ] 00:18:49.262 [2024-11-06 07:55:11.826126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.521 [2024-11-06 07:55:11.961501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.521 [2024-11-06 07:55:11.961633] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:18:49.521 [2024-11-06 07:55:11.961666] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:18:49.521 [2024-11-06 07:55:11.961687] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:49.780 00:18:49.780 real 0m0.744s 00:18:49.780 user 0m0.462s 00:18:49.780 sys 0m0.173s 00:18:49.780 07:55:12 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:49.780 07:55:12 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:18:49.780 ************************************ 00:18:49.780 END TEST bdev_json_nonarray 00:18:49.780 ************************************ 00:18:49.780 07:55:12 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:18:49.780 07:55:12 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:18:49.780 07:55:12 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:18:49.780 07:55:12 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:18:49.780 07:55:12 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:18:49.780 07:55:12 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:18:49.780 07:55:12 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:49.780 07:55:12 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:18:49.780 07:55:12 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:18:49.780 07:55:12 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:18:49.780 07:55:12 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:18:49.780 07:55:12 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:50.373 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:51.309 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:18:51.309 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:18:51.309 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:51.309 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:18:51.309 00:18:51.309 real 1m5.057s 00:18:51.309 user 1m46.161s 00:18:51.309 sys 0m29.338s 00:18:51.309 ************************************ 00:18:51.309 END TEST blockdev_xnvme 00:18:51.309 ************************************ 00:18:51.309 07:55:13 blockdev_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:51.309 07:55:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:51.309 07:55:13 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:18:51.309 07:55:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:51.309 07:55:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:51.309 07:55:13 -- 
common/autotest_common.sh@10 -- # set +x 00:18:51.309 ************************************ 00:18:51.309 START TEST ublk 00:18:51.309 ************************************ 00:18:51.309 07:55:13 ublk -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:18:51.568 * Looking for test storage... 00:18:51.568 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:18:51.568 07:55:13 ublk -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:18:51.568 07:55:13 ublk -- common/autotest_common.sh@1689 -- # lcov --version 00:18:51.568 07:55:13 ublk -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:18:51.568 07:55:14 ublk -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:18:51.568 07:55:14 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:51.568 07:55:14 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:51.568 07:55:14 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:51.568 07:55:14 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:18:51.568 07:55:14 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:18:51.568 07:55:14 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:18:51.568 07:55:14 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:18:51.568 07:55:14 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:18:51.568 07:55:14 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:18:51.568 07:55:14 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:18:51.568 07:55:14 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:51.568 07:55:14 ublk -- scripts/common.sh@344 -- # case "$op" in 00:18:51.568 07:55:14 ublk -- scripts/common.sh@345 -- # : 1 00:18:51.568 07:55:14 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:51.568 07:55:14 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:51.568 07:55:14 ublk -- scripts/common.sh@365 -- # decimal 1 00:18:51.568 07:55:14 ublk -- scripts/common.sh@353 -- # local d=1 00:18:51.568 07:55:14 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:51.568 07:55:14 ublk -- scripts/common.sh@355 -- # echo 1 00:18:51.568 07:55:14 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:18:51.568 07:55:14 ublk -- scripts/common.sh@366 -- # decimal 2 00:18:51.568 07:55:14 ublk -- scripts/common.sh@353 -- # local d=2 00:18:51.568 07:55:14 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:51.568 07:55:14 ublk -- scripts/common.sh@355 -- # echo 2 00:18:51.568 07:55:14 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:18:51.568 07:55:14 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:51.568 07:55:14 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:51.568 07:55:14 ublk -- scripts/common.sh@368 -- # return 0 00:18:51.568 07:55:14 ublk -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:51.568 07:55:14 ublk -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:18:51.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.568 --rc genhtml_branch_coverage=1 00:18:51.568 --rc genhtml_function_coverage=1 00:18:51.568 --rc genhtml_legend=1 00:18:51.568 --rc geninfo_all_blocks=1 00:18:51.568 --rc geninfo_unexecuted_blocks=1 00:18:51.568 00:18:51.568 ' 00:18:51.568 07:55:14 ublk -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:18:51.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.568 --rc genhtml_branch_coverage=1 00:18:51.568 --rc genhtml_function_coverage=1 00:18:51.568 --rc genhtml_legend=1 00:18:51.568 --rc geninfo_all_blocks=1 00:18:51.568 --rc geninfo_unexecuted_blocks=1 00:18:51.568 00:18:51.568 ' 00:18:51.568 07:55:14 ublk -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:18:51.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.568 --rc genhtml_branch_coverage=1 00:18:51.568 --rc genhtml_function_coverage=1 00:18:51.568 --rc genhtml_legend=1 00:18:51.568 --rc geninfo_all_blocks=1 00:18:51.568 --rc geninfo_unexecuted_blocks=1 00:18:51.568 00:18:51.568 ' 00:18:51.568 07:55:14 ublk -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:18:51.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.568 --rc genhtml_branch_coverage=1 00:18:51.568 --rc genhtml_function_coverage=1 00:18:51.568 --rc genhtml_legend=1 00:18:51.568 --rc geninfo_all_blocks=1 00:18:51.568 --rc geninfo_unexecuted_blocks=1 00:18:51.568 00:18:51.568 ' 00:18:51.568 07:55:14 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:18:51.568 07:55:14 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:18:51.568 07:55:14 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:18:51.568 07:55:14 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:18:51.568 07:55:14 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:18:51.568 07:55:14 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:18:51.568 07:55:14 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:18:51.568 07:55:14 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:18:51.568 07:55:14 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:18:51.568 07:55:14 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:18:51.568 07:55:14 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:18:51.568 07:55:14 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:18:51.568 07:55:14 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:18:51.568 07:55:14 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:18:51.568 07:55:14 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:18:51.568 07:55:14 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:18:51.568 07:55:14 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:18:51.568 07:55:14 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:18:51.568 07:55:14 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:18:51.568 07:55:14 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:18:51.568 07:55:14 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:51.568 07:55:14 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:51.568 07:55:14 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:51.568 ************************************ 00:18:51.568 START TEST test_save_ublk_config 00:18:51.568 ************************************ 00:18:51.568 07:55:14 ublk.test_save_ublk_config -- common/autotest_common.sh@1125 -- # test_save_config 00:18:51.568 07:55:14 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:18:51.568 07:55:14 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=72781 00:18:51.568 07:55:14 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:18:51.568 07:55:14 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:18:51.568 07:55:14 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 72781 00:18:51.568 07:55:14 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 72781 ']' 00:18:51.568 07:55:14 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:51.568 07:55:14 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:51.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:51.568 07:55:14 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:51.568 07:55:14 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:51.568 07:55:14 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:51.827 [2024-11-06 07:55:14.234745] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
00:18:51.827 [2024-11-06 07:55:14.234938] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72781 ] 00:18:51.827 [2024-11-06 07:55:14.425723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.086 [2024-11-06 07:55:14.600651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.021 07:55:15 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:53.021 07:55:15 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:18:53.021 07:55:15 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:18:53.021 07:55:15 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:18:53.021 07:55:15 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.021 07:55:15 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:53.021 [2024-11-06 07:55:15.614293] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:53.021 [2024-11-06 07:55:15.615580] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:53.280 malloc0 00:18:53.280 [2024-11-06 07:55:15.708503] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:18:53.280 [2024-11-06 07:55:15.708688] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:18:53.280 [2024-11-06 07:55:15.708713] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:53.280 [2024-11-06 07:55:15.708727] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:53.280 [2024-11-06 07:55:15.717477] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:53.280 [2024-11-06 07:55:15.717523] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:53.280 [2024-11-06 07:55:15.724335] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:53.280 [2024-11-06 07:55:15.724533] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:53.280 [2024-11-06 07:55:15.744345] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:53.280 0 00:18:53.280 07:55:15 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.280 07:55:15 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:18:53.280 07:55:15 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.280 07:55:15 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:53.539 07:55:16 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.539 07:55:16 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:18:53.539 "subsystems": [ 00:18:53.539 { 00:18:53.539 "subsystem": "fsdev", 00:18:53.539 "config": [ 00:18:53.539 { 00:18:53.539 "method": "fsdev_set_opts", 00:18:53.539 "params": { 00:18:53.539 "fsdev_io_pool_size": 65535, 00:18:53.539 "fsdev_io_cache_size": 256 00:18:53.539 } 00:18:53.539 } 00:18:53.539 ] 00:18:53.539 }, 00:18:53.539 { 00:18:53.539 "subsystem": "keyring", 00:18:53.539 "config": [] 00:18:53.539 }, 00:18:53.539 { 00:18:53.539 "subsystem": "iobuf", 00:18:53.539 "config": [ 00:18:53.539 { 
00:18:53.539 "method": "iobuf_set_options", 00:18:53.539 "params": { 00:18:53.539 "small_pool_count": 8192, 00:18:53.539 "large_pool_count": 1024, 00:18:53.539 "small_bufsize": 8192, 00:18:53.539 "large_bufsize": 135168, 00:18:53.539 "enable_numa": false 00:18:53.539 } 00:18:53.539 } 00:18:53.539 ] 00:18:53.539 }, 00:18:53.539 { 00:18:53.539 "subsystem": "sock", 00:18:53.539 "config": [ 00:18:53.539 { 00:18:53.539 "method": "sock_set_default_impl", 00:18:53.539 "params": { 00:18:53.539 "impl_name": "posix" 00:18:53.539 } 00:18:53.539 }, 00:18:53.539 { 00:18:53.539 "method": "sock_impl_set_options", 00:18:53.539 "params": { 00:18:53.539 "impl_name": "ssl", 00:18:53.539 "recv_buf_size": 4096, 00:18:53.539 "send_buf_size": 4096, 00:18:53.539 "enable_recv_pipe": true, 00:18:53.539 "enable_quickack": false, 00:18:53.539 "enable_placement_id": 0, 00:18:53.539 "enable_zerocopy_send_server": true, 00:18:53.539 "enable_zerocopy_send_client": false, 00:18:53.539 "zerocopy_threshold": 0, 00:18:53.539 "tls_version": 0, 00:18:53.539 "enable_ktls": false 00:18:53.539 } 00:18:53.539 }, 00:18:53.539 { 00:18:53.539 "method": "sock_impl_set_options", 00:18:53.539 "params": { 00:18:53.539 "impl_name": "posix", 00:18:53.539 "recv_buf_size": 2097152, 00:18:53.539 "send_buf_size": 2097152, 00:18:53.539 "enable_recv_pipe": true, 00:18:53.539 "enable_quickack": false, 00:18:53.539 "enable_placement_id": 0, 00:18:53.539 "enable_zerocopy_send_server": true, 00:18:53.539 "enable_zerocopy_send_client": false, 00:18:53.539 "zerocopy_threshold": 0, 00:18:53.539 "tls_version": 0, 00:18:53.539 "enable_ktls": false 00:18:53.539 } 00:18:53.539 } 00:18:53.539 ] 00:18:53.539 }, 00:18:53.539 { 00:18:53.539 "subsystem": "vmd", 00:18:53.539 "config": [] 00:18:53.539 }, 00:18:53.539 { 00:18:53.539 "subsystem": "accel", 00:18:53.539 "config": [ 00:18:53.539 { 00:18:53.539 "method": "accel_set_options", 00:18:53.539 "params": { 00:18:53.539 "small_cache_size": 128, 00:18:53.539 "large_cache_size": 16, 00:18:53.539 "task_count": 2048, 00:18:53.539 "sequence_count": 2048, 00:18:53.539 "buf_count": 2048 00:18:53.539 } 00:18:53.539 } 00:18:53.539 ] 00:18:53.539 }, 00:18:53.539 { 00:18:53.539 "subsystem": "bdev", 00:18:53.539 "config": [ 00:18:53.539 { 00:18:53.539 "method": "bdev_set_options", 00:18:53.539 "params": { 00:18:53.539 "bdev_io_pool_size": 65535, 00:18:53.539 "bdev_io_cache_size": 256, 00:18:53.539 "bdev_auto_examine": true, 00:18:53.539 "iobuf_small_cache_size": 128, 00:18:53.539 "iobuf_large_cache_size": 16 00:18:53.539 } 00:18:53.539 }, 00:18:53.539 { 00:18:53.539 "method": "bdev_raid_set_options", 00:18:53.539 "params": { 00:18:53.539 "process_window_size_kb": 1024, 00:18:53.539 "process_max_bandwidth_mb_sec": 0 00:18:53.539 } 00:18:53.539 }, 00:18:53.539 { 00:18:53.539 "method": "bdev_iscsi_set_options", 00:18:53.539 "params": { 00:18:53.539 "timeout_sec": 30 00:18:53.539 } 00:18:53.539 }, 00:18:53.539 { 00:18:53.539 "method": "bdev_nvme_set_options", 00:18:53.539 "params": { 00:18:53.539 "action_on_timeout": "none", 00:18:53.539 "timeout_us": 0, 00:18:53.539 "timeout_admin_us": 0, 00:18:53.539 "keep_alive_timeout_ms": 10000, 00:18:53.539 "arbitration_burst": 0, 00:18:53.539 "low_priority_weight": 0, 00:18:53.539 "medium_priority_weight": 0, 00:18:53.539 "high_priority_weight": 0, 00:18:53.539 "nvme_adminq_poll_period_us": 10000, 00:18:53.539 "nvme_ioq_poll_period_us": 0, 00:18:53.539 "io_queue_requests": 0, 00:18:53.539 "delay_cmd_submit": true, 00:18:53.539 "transport_retry_count": 4, 00:18:53.539 
"bdev_retry_count": 3, 00:18:53.539 "transport_ack_timeout": 0, 00:18:53.539 "ctrlr_loss_timeout_sec": 0, 00:18:53.539 "reconnect_delay_sec": 0, 00:18:53.540 "fast_io_fail_timeout_sec": 0, 00:18:53.540 "disable_auto_failback": false, 00:18:53.540 "generate_uuids": false, 00:18:53.540 "transport_tos": 0, 00:18:53.540 "nvme_error_stat": false, 00:18:53.540 "rdma_srq_size": 0, 00:18:53.540 "io_path_stat": false, 00:18:53.540 "allow_accel_sequence": false, 00:18:53.540 "rdma_max_cq_size": 0, 00:18:53.540 "rdma_cm_event_timeout_ms": 0, 00:18:53.540 "dhchap_digests": [ 00:18:53.540 "sha256", 00:18:53.540 "sha384", 00:18:53.540 "sha512" 00:18:53.540 ], 00:18:53.540 "dhchap_dhgroups": [ 00:18:53.540 "null", 00:18:53.540 "ffdhe2048", 00:18:53.540 "ffdhe3072", 00:18:53.540 "ffdhe4096", 00:18:53.540 "ffdhe6144", 00:18:53.540 "ffdhe8192" 00:18:53.540 ] 00:18:53.540 } 00:18:53.540 }, 00:18:53.540 { 00:18:53.540 "method": "bdev_nvme_set_hotplug", 00:18:53.540 "params": { 00:18:53.540 "period_us": 100000, 00:18:53.540 "enable": false 00:18:53.540 } 00:18:53.540 }, 00:18:53.540 { 00:18:53.540 "method": "bdev_malloc_create", 00:18:53.540 "params": { 00:18:53.540 "name": "malloc0", 00:18:53.540 "num_blocks": 8192, 00:18:53.540 "block_size": 4096, 00:18:53.540 "physical_block_size": 4096, 00:18:53.540 "uuid": "e302417c-9d4c-4723-92f2-8b4ddf54b5fa", 00:18:53.540 "optimal_io_boundary": 0, 00:18:53.540 "md_size": 0, 00:18:53.540 "dif_type": 0, 00:18:53.540 "dif_is_head_of_md": false, 00:18:53.540 "dif_pi_format": 0 00:18:53.540 } 00:18:53.540 }, 00:18:53.540 { 00:18:53.540 "method": "bdev_wait_for_examine" 00:18:53.540 } 00:18:53.540 ] 00:18:53.540 }, 00:18:53.540 { 00:18:53.540 "subsystem": "scsi", 00:18:53.540 "config": null 00:18:53.540 }, 00:18:53.540 { 00:18:53.540 "subsystem": "scheduler", 00:18:53.540 "config": [ 00:18:53.540 { 00:18:53.540 "method": "framework_set_scheduler", 00:18:53.540 "params": { 00:18:53.540 "name": "static" 00:18:53.540 } 00:18:53.540 } 00:18:53.540 ] 00:18:53.540 }, 00:18:53.540 { 00:18:53.540 "subsystem": "vhost_scsi", 00:18:53.540 "config": [] 00:18:53.540 }, 00:18:53.540 { 00:18:53.540 "subsystem": "vhost_blk", 00:18:53.540 "config": [] 00:18:53.540 }, 00:18:53.540 { 00:18:53.540 "subsystem": "ublk", 00:18:53.540 "config": [ 00:18:53.540 { 00:18:53.540 "method": "ublk_create_target", 00:18:53.540 "params": { 00:18:53.540 "cpumask": "1" 00:18:53.540 } 00:18:53.540 }, 00:18:53.540 { 00:18:53.540 "method": "ublk_start_disk", 00:18:53.540 "params": { 00:18:53.540 "bdev_name": "malloc0", 00:18:53.540 "ublk_id": 0, 00:18:53.540 "num_queues": 1, 00:18:53.540 "queue_depth": 128 00:18:53.540 } 00:18:53.540 } 00:18:53.540 ] 00:18:53.540 }, 00:18:53.540 { 00:18:53.540 "subsystem": "nbd", 00:18:53.540 "config": [] 00:18:53.540 }, 00:18:53.540 { 00:18:53.540 "subsystem": "nvmf", 00:18:53.540 "config": [ 00:18:53.540 { 00:18:53.540 "method": "nvmf_set_config", 00:18:53.540 "params": { 00:18:53.540 "discovery_filter": "match_any", 00:18:53.540 "admin_cmd_passthru": { 00:18:53.540 "identify_ctrlr": false 00:18:53.540 }, 00:18:53.540 "dhchap_digests": [ 00:18:53.540 "sha256", 00:18:53.540 "sha384", 00:18:53.540 "sha512" 00:18:53.540 ], 00:18:53.540 "dhchap_dhgroups": [ 00:18:53.540 "null", 00:18:53.540 "ffdhe2048", 00:18:53.540 "ffdhe3072", 00:18:53.540 "ffdhe4096", 00:18:53.540 "ffdhe6144", 00:18:53.540 "ffdhe8192" 00:18:53.540 ] 00:18:53.540 } 00:18:53.540 }, 00:18:53.540 { 00:18:53.540 "method": "nvmf_set_max_subsystems", 00:18:53.540 "params": { 00:18:53.540 "max_subsystems": 1024 
00:18:53.540 } 00:18:53.540 }, 00:18:53.540 { 00:18:53.540 "method": "nvmf_set_crdt", 00:18:53.540 "params": { 00:18:53.540 "crdt1": 0, 00:18:53.540 "crdt2": 0, 00:18:53.540 "crdt3": 0 00:18:53.540 } 00:18:53.540 } 00:18:53.540 ] 00:18:53.540 }, 00:18:53.540 { 00:18:53.540 "subsystem": "iscsi", 00:18:53.540 "config": [ 00:18:53.540 { 00:18:53.540 "method": "iscsi_set_options", 00:18:53.540 "params": { 00:18:53.540 "node_base": "iqn.2016-06.io.spdk", 00:18:53.540 "max_sessions": 128, 00:18:53.540 "max_connections_per_session": 2, 00:18:53.540 "max_queue_depth": 64, 00:18:53.540 "default_time2wait": 2, 00:18:53.540 "default_time2retain": 20, 00:18:53.540 "first_burst_length": 8192, 00:18:53.540 "immediate_data": true, 00:18:53.540 "allow_duplicated_isid": false, 00:18:53.540 "error_recovery_level": 0, 00:18:53.540 "nop_timeout": 60, 00:18:53.540 "nop_in_interval": 30, 00:18:53.540 "disable_chap": false, 00:18:53.540 "require_chap": false, 00:18:53.540 "mutual_chap": false, 00:18:53.540 "chap_group": 0, 00:18:53.540 "max_large_datain_per_connection": 64, 00:18:53.540 "max_r2t_per_connection": 4, 00:18:53.540 "pdu_pool_size": 36864, 00:18:53.540 "immediate_data_pool_size": 16384, 00:18:53.540 "data_out_pool_size": 2048 00:18:53.540 } 00:18:53.540 } 00:18:53.540 ] 00:18:53.540 } 00:18:53.540 ] 00:18:53.540 }' 00:18:53.540 07:55:16 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 72781 00:18:53.540 07:55:16 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 72781 ']' 00:18:53.540 07:55:16 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 72781 00:18:53.540 07:55:16 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:18:53.540 07:55:16 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:53.540 07:55:16 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72781 00:18:53.540 07:55:16 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:53.540 killing process with pid 72781 00:18:53.540 07:55:16 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:53.540 07:55:16 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72781' 00:18:53.540 07:55:16 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 72781 00:18:53.540 07:55:16 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 72781 00:18:55.463 [2024-11-06 07:55:17.566643] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:55.463 [2024-11-06 07:55:17.612314] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:55.463 [2024-11-06 07:55:17.612527] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:55.463 [2024-11-06 07:55:17.614531] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:55.463 [2024-11-06 07:55:17.614596] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:55.463 [2024-11-06 07:55:17.614618] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:55.463 [2024-11-06 07:55:17.614654] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:55.463 [2024-11-06 07:55:17.614845] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:56.837 07:55:19 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=72847 00:18:56.837 07:55:19 ublk.test_save_ublk_config -- 
ublk/ublk.sh@121 -- # waitforlisten 72847 00:18:56.837 07:55:19 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 72847 ']' 00:18:56.837 07:55:19 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:56.837 07:55:19 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:56.837 07:55:19 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:18:56.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:56.837 07:55:19 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:56.837 07:55:19 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:18:56.837 "subsystems": [ 00:18:56.837 { 00:18:56.837 "subsystem": "fsdev", 00:18:56.837 "config": [ 00:18:56.837 { 00:18:56.837 "method": "fsdev_set_opts", 00:18:56.837 "params": { 00:18:56.837 "fsdev_io_pool_size": 65535, 00:18:56.837 "fsdev_io_cache_size": 256 00:18:56.837 } 00:18:56.837 } 00:18:56.837 ] 00:18:56.837 }, 00:18:56.837 { 00:18:56.837 "subsystem": "keyring", 00:18:56.837 "config": [] 00:18:56.837 }, 00:18:56.837 { 00:18:56.837 "subsystem": "iobuf", 00:18:56.837 "config": [ 00:18:56.837 { 00:18:56.837 "method": "iobuf_set_options", 00:18:56.837 "params": { 00:18:56.837 "small_pool_count": 8192, 00:18:56.837 "large_pool_count": 1024, 00:18:56.837 "small_bufsize": 8192, 00:18:56.837 "large_bufsize": 135168, 00:18:56.837 "enable_numa": false 00:18:56.837 } 00:18:56.837 } 00:18:56.837 ] 00:18:56.837 }, 00:18:56.837 { 00:18:56.837 "subsystem": "sock", 00:18:56.837 "config": [ 00:18:56.837 { 00:18:56.837 "method": "sock_set_default_impl", 00:18:56.837 "params": { 00:18:56.837 "impl_name": "posix" 00:18:56.837 } 00:18:56.837 }, 00:18:56.837 { 00:18:56.837 "method": "sock_impl_set_options", 00:18:56.837 "params": { 00:18:56.837 "impl_name": "ssl", 00:18:56.837 "recv_buf_size": 4096, 00:18:56.837 "send_buf_size": 4096, 00:18:56.837 "enable_recv_pipe": true, 00:18:56.837 "enable_quickack": false, 00:18:56.837 "enable_placement_id": 0, 00:18:56.837 "enable_zerocopy_send_server": true, 00:18:56.837 "enable_zerocopy_send_client": false, 00:18:56.837 "zerocopy_threshold": 0, 00:18:56.837 "tls_version": 0, 00:18:56.837 "enable_ktls": false 00:18:56.837 } 00:18:56.837 }, 00:18:56.837 { 00:18:56.837 "method": "sock_impl_set_options", 00:18:56.837 "params": { 00:18:56.837 "impl_name": "posix", 00:18:56.837 "recv_buf_size": 2097152, 00:18:56.837 "send_buf_size": 2097152, 00:18:56.837 "enable_recv_pipe": true, 00:18:56.837 "enable_quickack": false, 00:18:56.837 "enable_placement_id": 0, 00:18:56.837 "enable_zerocopy_send_server": true, 00:18:56.837 "enable_zerocopy_send_client": false, 00:18:56.837 "zerocopy_threshold": 0, 00:18:56.837 "tls_version": 0, 00:18:56.837 "enable_ktls": false 00:18:56.837 } 00:18:56.837 } 00:18:56.837 ] 00:18:56.837 }, 00:18:56.837 { 00:18:56.837 "subsystem": "vmd", 00:18:56.837 "config": [] 00:18:56.837 }, 00:18:56.837 { 00:18:56.837 "subsystem": "accel", 00:18:56.837 "config": [ 00:18:56.837 { 00:18:56.837 "method": "accel_set_options", 00:18:56.837 "params": { 00:18:56.837 "small_cache_size": 128, 00:18:56.837 "large_cache_size": 16, 00:18:56.837 "task_count": 2048, 00:18:56.837 "sequence_count": 2048, 00:18:56.837 "buf_count": 2048 00:18:56.837 } 00:18:56.837 } 00:18:56.837 ] 00:18:56.837 }, 
00:18:56.837 { 00:18:56.837 "subsystem": "bdev", 00:18:56.837 "config": [ 00:18:56.837 { 00:18:56.837 "method": "bdev_set_options", 00:18:56.837 "params": { 00:18:56.837 "bdev_io_pool_size": 65535, 00:18:56.837 "bdev_io_cache_size": 256, 00:18:56.837 "bdev_auto_examine": true, 00:18:56.837 "iobuf_small_cache_size": 128, 00:18:56.837 "iobuf_large_cache_size": 16 00:18:56.837 } 00:18:56.837 }, 00:18:56.837 { 00:18:56.837 "method": "bdev_raid_set_options", 00:18:56.837 "params": { 00:18:56.837 "process_window_size_kb": 1024, 00:18:56.837 "process_max_bandwidth_mb_sec": 0 00:18:56.837 } 00:18:56.837 }, 00:18:56.837 { 00:18:56.837 "method": "bdev_iscsi_set_options", 00:18:56.837 "params": { 00:18:56.837 "timeout_sec": 30 00:18:56.837 } 00:18:56.837 }, 00:18:56.837 { 00:18:56.837 "method": "bdev_nvme_set_options", 00:18:56.837 "params": { 00:18:56.837 "action_on_timeout": "none", 00:18:56.837 "timeout_us": 0, 00:18:56.837 "timeout_admin_us": 0, 00:18:56.837 "keep_alive_timeout_ms": 10000, 00:18:56.837 "arbitration_burst": 0, 00:18:56.837 "low_priority_weight": 0, 00:18:56.837 "medium_priority_weight": 0, 00:18:56.837 "high_priority_weight": 0, 00:18:56.837 "nvme_adminq_poll_period_us": 10000, 00:18:56.837 "nvme_ioq_poll_period_us": 0, 00:18:56.837 "io_queue_requests": 0, 00:18:56.837 "delay_cmd_submit": true, 00:18:56.837 "transport_retry_count": 4, 00:18:56.837 "bdev_retry_count": 3, 00:18:56.837 "transport_ack_timeout": 0, 00:18:56.837 "ctrlr_loss_timeout_sec": 0, 00:18:56.837 "reconnect_delay_sec": 0, 00:18:56.837 "fast_io_fail_timeout_sec": 0, 00:18:56.837 "disable_auto_failback": false, 00:18:56.837 "generate_uuids": false, 00:18:56.837 "transport_tos": 0, 00:18:56.837 "nvme_error_stat": false, 00:18:56.837 "rdma_srq_size": 0, 00:18:56.837 "io_path_stat": false, 00:18:56.837 "allow_accel_sequence": false, 00:18:56.837 "rdma_max_cq_size": 0, 00:18:56.837 "rdma_cm_event_timeout_ms": 0, 00:18:56.837 "dhchap_digests": [ 00:18:56.837 "sha256", 00:18:56.837 "sha384", 00:18:56.837 "sha512" 00:18:56.837 ], 00:18:56.837 "dhchap_dhgroups": [ 00:18:56.837 "null", 00:18:56.837 "ffdhe2048", 00:18:56.837 "ffdhe3072", 00:18:56.837 "ffdhe4096", 00:18:56.837 "ffdhe6144", 00:18:56.837 "ffdhe8192" 00:18:56.837 ] 00:18:56.837 } 00:18:56.837 }, 00:18:56.837 { 00:18:56.837 "method": "bdev_nvme_set_hotplug", 00:18:56.837 "params": { 00:18:56.837 "period_us": 100000, 00:18:56.837 "enable": false 00:18:56.837 } 00:18:56.837 }, 00:18:56.837 { 00:18:56.837 "method": "bdev_malloc_create", 00:18:56.837 "params": { 00:18:56.837 "name": "malloc0", 00:18:56.837 "num_blocks": 8192, 00:18:56.837 "block_size": 4096, 00:18:56.837 "physical_block_size": 4096, 00:18:56.837 "uuid": "e302417c-9d4c-4723-92f2-8b4ddf54b5fa", 00:18:56.837 "optimal_io_boundary": 0, 00:18:56.837 "md_size": 0, 00:18:56.837 "dif_type": 0, 00:18:56.837 "dif_is_head_of_md": false, 00:18:56.837 "dif_pi_format": 0 00:18:56.837 } 00:18:56.837 }, 00:18:56.837 { 00:18:56.837 "method": "bdev_wait_for_examine" 00:18:56.837 } 00:18:56.837 ] 00:18:56.837 }, 00:18:56.837 { 00:18:56.837 "subsystem": "scsi", 00:18:56.837 "config": null 00:18:56.837 }, 00:18:56.837 { 00:18:56.837 "subsystem": "scheduler", 00:18:56.837 "config": [ 00:18:56.837 { 00:18:56.837 "method": "framework_set_scheduler", 00:18:56.837 "params": { 00:18:56.837 "name": "static" 00:18:56.837 } 00:18:56.837 } 00:18:56.837 ] 00:18:56.837 }, 00:18:56.837 { 00:18:56.837 "subsystem": "vhost_scsi", 00:18:56.837 "config": [] 00:18:56.837 }, 00:18:56.837 { 00:18:56.837 "subsystem": "vhost_blk", 00:18:56.837 
"config": [] 00:18:56.837 }, 00:18:56.837 { 00:18:56.837 "subsystem": "ublk", 00:18:56.837 "config": [ 00:18:56.837 { 00:18:56.837 "method": "ublk_create_target", 00:18:56.837 "params": { 00:18:56.837 "cpumask": "1" 00:18:56.837 } 00:18:56.837 }, 00:18:56.837 { 00:18:56.837 "method": "ublk_start_disk", 00:18:56.837 "params": { 00:18:56.837 "bdev_name": "malloc0", 00:18:56.837 "ublk_id": 0, 00:18:56.837 "num_queues": 1, 00:18:56.837 "queue_depth": 128 00:18:56.837 } 00:18:56.837 } 00:18:56.837 ] 00:18:56.837 }, 00:18:56.837 { 00:18:56.837 "subsystem": "nbd", 00:18:56.837 "config": [] 00:18:56.837 }, 00:18:56.837 { 00:18:56.837 "subsystem": "nvmf", 00:18:56.837 "config": [ 00:18:56.837 { 00:18:56.837 "method": "nvmf_set_config", 00:18:56.837 "params": { 00:18:56.837 "discovery_filter": "match_any", 00:18:56.837 "admin_cmd_passthru": { 00:18:56.837 "identify_ctrlr": false 00:18:56.837 }, 00:18:56.837 "dhchap_digests": [ 00:18:56.837 "sha256", 00:18:56.838 "sha384", 00:18:56.838 "sha512" 00:18:56.838 ], 00:18:56.838 "dhchap_dhgroups": [ 00:18:56.838 "null", 00:18:56.838 "ffdhe2048", 00:18:56.838 "ffdhe3072", 00:18:56.838 "ffdhe4096", 00:18:56.838 "ffdhe6144", 00:18:56.838 "ffdhe81 07:55:19 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:56.838 92" 00:18:56.838 ] 00:18:56.838 } 00:18:56.838 }, 00:18:56.838 { 00:18:56.838 "method": "nvmf_set_max_subsystems", 00:18:56.838 "params": { 00:18:56.838 "max_subsystems": 1024 00:18:56.838 } 00:18:56.838 }, 00:18:56.838 { 00:18:56.838 "method": "nvmf_set_crdt", 00:18:56.838 "params": { 00:18:56.838 "crdt1": 0, 00:18:56.838 "crdt2": 0, 00:18:56.838 "crdt3": 0 00:18:56.838 } 00:18:56.838 } 00:18:56.838 ] 00:18:56.838 }, 00:18:56.838 { 00:18:56.838 "subsystem": "iscsi", 00:18:56.838 "config": [ 00:18:56.838 { 00:18:56.838 "method": "iscsi_set_options", 00:18:56.838 "params": { 00:18:56.838 "node_base": "iqn.2016-06.io.spdk", 00:18:56.838 "max_sessions": 128, 00:18:56.838 "max_connections_per_session": 2, 00:18:56.838 "max_queue_depth": 64, 00:18:56.838 "default_time2wait": 2, 00:18:56.838 "default_time2retain": 20, 00:18:56.838 "first_burst_length": 8192, 00:18:56.838 "immediate_data": true, 00:18:56.838 "allow_duplicated_isid": false, 00:18:56.838 "error_recovery_level": 0, 00:18:56.838 "nop_timeout": 60, 00:18:56.838 "nop_in_interval": 30, 00:18:56.838 "disable_chap": false, 00:18:56.838 "require_chap": false, 00:18:56.838 "mutual_chap": false, 00:18:56.838 "chap_group": 0, 00:18:56.838 "max_large_datain_per_connection": 64, 00:18:56.838 "max_r2t_per_connection": 4, 00:18:56.838 "pdu_pool_size": 36864, 00:18:56.838 "immediate_data_pool_size": 16384, 00:18:56.838 "data_out_pool_size": 2048 00:18:56.838 } 00:18:56.838 } 00:18:56.838 ] 00:18:56.838 } 00:18:56.838 ] 00:18:56.838 }' 00:18:56.838 07:55:19 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:57.098 [2024-11-06 07:55:19.562837] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
00:18:57.098 [2024-11-06 07:55:19.563030] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72847 ] 00:18:57.357 [2024-11-06 07:55:19.747457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.357 [2024-11-06 07:55:19.882595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.761 [2024-11-06 07:55:20.942275] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:58.761 [2024-11-06 07:55:20.943552] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:58.761 [2024-11-06 07:55:20.950555] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:18:58.761 [2024-11-06 07:55:20.950693] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:18:58.761 [2024-11-06 07:55:20.950713] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:58.761 [2024-11-06 07:55:20.950724] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:58.761 [2024-11-06 07:55:20.959377] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:58.761 [2024-11-06 07:55:20.959416] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:58.761 [2024-11-06 07:55:20.966311] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:58.761 [2024-11-06 07:55:20.966478] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:58.761 [2024-11-06 07:55:20.983290] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:58.761 07:55:21 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:58.761 07:55:21 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:18:58.761 07:55:21 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:18:58.761 07:55:21 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:18:58.761 07:55:21 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.761 07:55:21 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:58.761 07:55:21 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.761 07:55:21 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:18:58.761 07:55:21 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:18:58.761 07:55:21 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 72847 00:18:58.761 07:55:21 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 72847 ']' 00:18:58.761 07:55:21 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 72847 00:18:58.761 07:55:21 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:18:58.761 07:55:21 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:58.761 07:55:21 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72847 00:18:58.761 07:55:21 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:58.761 killing process with pid 72847 00:18:58.761 
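The round trip just completed is the point of this test: pid 72781 dumped its live ublk setup with save_config, and pid 72847 was started with that JSON fed back through -c /dev/fd/63, recreating /dev/ublkb0 with no further RPCs. A hand-run equivalent might look like the sketch below; the config filename is illustrative, and the malloc size (32 MiB) is read off the num_blocks/block_size pair in the dump above:

    # Sketch of the save/restore round trip, assuming a built SPDK tree
    # (startup waits omitted for brevity).
    ./build/bin/spdk_tgt -L ublk &                      # first target
    ./scripts/rpc.py ublk_create_target
    ./scripts/rpc.py bdev_malloc_create -b malloc0 32 4096
    ./scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128
    ./scripts/rpc.py save_config > ublk_config.json     # dump live JSON config
    kill %1 && wait                                     # stop the first target
    ./build/bin/spdk_tgt -L ublk -c ublk_config.json &  # replay the saved config
    ./scripts/rpc.py ublk_get_disks                     # /dev/ublkb0 is back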
07:55:21 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:58.761 07:55:21 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72847' 00:18:58.761 07:55:21 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 72847 00:18:58.761 07:55:21 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 72847 00:19:00.668 [2024-11-06 07:55:22.839077] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:19:00.668 [2024-11-06 07:55:22.869398] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:00.668 [2024-11-06 07:55:22.869582] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:19:00.668 [2024-11-06 07:55:22.877299] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:00.668 [2024-11-06 07:55:22.877370] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:19:00.668 [2024-11-06 07:55:22.877384] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:19:00.668 [2024-11-06 07:55:22.877419] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:00.668 [2024-11-06 07:55:22.877609] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:02.598 07:55:24 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:19:02.598 00:19:02.598 real 0m10.581s 00:19:02.598 user 0m7.936s 00:19:02.598 sys 0m3.696s 00:19:02.598 07:55:24 ublk.test_save_ublk_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:02.598 07:55:24 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:19:02.598 ************************************ 00:19:02.599 END TEST test_save_ublk_config 00:19:02.599 ************************************ 00:19:02.599 07:55:24 ublk -- ublk/ublk.sh@139 -- # spdk_pid=72943 00:19:02.599 07:55:24 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:19:02.599 07:55:24 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:02.599 07:55:24 ublk -- ublk/ublk.sh@141 -- # waitforlisten 72943 00:19:02.599 07:55:24 ublk -- common/autotest_common.sh@831 -- # '[' -z 72943 ']' 00:19:02.599 07:55:24 ublk -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:02.599 07:55:24 ublk -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:02.599 07:55:24 ublk -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:02.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:02.599 07:55:24 ublk -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:02.599 07:55:24 ublk -- common/autotest_common.sh@10 -- # set +x 00:19:02.599 [2024-11-06 07:55:24.870806] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
00:19:02.599 [2024-11-06 07:55:24.870971] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72943 ] 00:19:02.599 [2024-11-06 07:55:25.048994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:02.599 [2024-11-06 07:55:25.187158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.599 [2024-11-06 07:55:25.187170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:03.534 07:55:26 ublk -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:03.534 07:55:26 ublk -- common/autotest_common.sh@864 -- # return 0 00:19:03.534 07:55:26 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:19:03.534 07:55:26 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:03.534 07:55:26 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:03.534 07:55:26 ublk -- common/autotest_common.sh@10 -- # set +x 00:19:03.534 ************************************ 00:19:03.534 START TEST test_create_ublk 00:19:03.534 ************************************ 00:19:03.534 07:55:26 ublk.test_create_ublk -- common/autotest_common.sh@1125 -- # test_create_ublk 00:19:03.534 07:55:26 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:19:03.534 07:55:26 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.534 07:55:26 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:03.534 [2024-11-06 07:55:26.158292] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:19:03.792 [2024-11-06 07:55:26.165645] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:19:03.792 07:55:26 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.792 07:55:26 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:19:03.792 07:55:26 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:19:03.792 07:55:26 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.792 07:55:26 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:04.050 07:55:26 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.050 07:55:26 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:19:04.050 07:55:26 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:19:04.050 07:55:26 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.050 07:55:26 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:04.050 [2024-11-06 07:55:26.485519] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:19:04.050 [2024-11-06 07:55:26.486193] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:19:04.050 [2024-11-06 07:55:26.486244] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:19:04.050 [2024-11-06 07:55:26.486278] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:19:04.050 [2024-11-06 07:55:26.493418] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:04.050 [2024-11-06 07:55:26.493453] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:04.050 
[2024-11-06 07:55:26.501446] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:04.050 [2024-11-06 07:55:26.502366] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:19:04.050 [2024-11-06 07:55:26.524391] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:19:04.050 07:55:26 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.050 07:55:26 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:19:04.051 07:55:26 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:19:04.051 07:55:26 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:19:04.051 07:55:26 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.051 07:55:26 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:04.051 07:55:26 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.051 07:55:26 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:19:04.051 { 00:19:04.051 "ublk_device": "/dev/ublkb0", 00:19:04.051 "id": 0, 00:19:04.051 "queue_depth": 512, 00:19:04.051 "num_queues": 4, 00:19:04.051 "bdev_name": "Malloc0" 00:19:04.051 } 00:19:04.051 ]' 00:19:04.051 07:55:26 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:19:04.051 07:55:26 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:19:04.051 07:55:26 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:19:04.051 07:55:26 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:19:04.051 07:55:26 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:19:04.309 07:55:26 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:19:04.309 07:55:26 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:19:04.309 07:55:26 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:19:04.309 07:55:26 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:19:04.309 07:55:26 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:19:04.309 07:55:26 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:19:04.309 07:55:26 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:19:04.309 07:55:26 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:19:04.309 07:55:26 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:19:04.309 07:55:26 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:19:04.309 07:55:26 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:19:04.309 07:55:26 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:19:04.309 07:55:26 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:19:04.309 07:55:26 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:19:04.309 07:55:26 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:19:04.309 07:55:26 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
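The jq checks above confirm each field of the ublk_get_disks output, and run_fio_test has assembled its template into a single fio invocation: write 128 MiB of the 0xcc pattern through /dev/ublkb0 for 10 seconds with direct I/O. The fully expanded command, identical to what executes next, can be reproduced by hand against any live ublk device:

    fio --name=fio_test --filename=/dev/ublkb0 \
        --offset=0 --size=134217728 --rw=write --direct=1 \
        --time_based --runtime=10 \
        --do_verify=1 --verify=pattern --verify_pattern=0xcc \
        --verify_state_save=0

Because --time_based keeps writing for the whole runtime, fio warns that the verification read phase will never start; that warning appears verbatim in the output below and is expected here.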
00:19:04.309 07:55:26 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:19:04.309 fio: verification read phase will never start because write phase uses all of runtime 00:19:04.309 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:19:04.309 fio-3.35 00:19:04.309 Starting 1 process 00:19:16.513 00:19:16.513 fio_test: (groupid=0, jobs=1): err= 0: pid=72997: Wed Nov 6 07:55:37 2024 00:19:16.513 write: IOPS=7359, BW=28.7MiB/s (30.1MB/s)(288MiB/10001msec); 0 zone resets 00:19:16.513 clat (usec): min=58, max=4080, avg=134.17, stdev=142.39 00:19:16.513 lat (usec): min=58, max=4081, avg=135.14, stdev=142.42 00:19:16.513 clat percentiles (usec): 00:19:16.513 | 1.00th=[ 62], 5.00th=[ 104], 10.00th=[ 112], 20.00th=[ 117], 00:19:16.513 | 30.00th=[ 120], 40.00th=[ 123], 50.00th=[ 126], 60.00th=[ 129], 00:19:16.513 | 70.00th=[ 133], 80.00th=[ 139], 90.00th=[ 147], 95.00th=[ 155], 00:19:16.513 | 99.00th=[ 180], 99.50th=[ 215], 99.90th=[ 2868], 99.95th=[ 3326], 00:19:16.513 | 99.99th=[ 3818] 00:19:16.513 bw ( KiB/s): min=27080, max=42512, per=100.00%, avg=29565.47, stdev=3335.29, samples=19 00:19:16.513 iops : min= 6770, max=10628, avg=7391.37, stdev=833.82, samples=19 00:19:16.513 lat (usec) : 100=4.65%, 250=94.88%, 500=0.05%, 750=0.02%, 1000=0.03% 00:19:16.513 lat (msec) : 2=0.14%, 4=0.23%, 10=0.01% 00:19:16.513 cpu : usr=2.13%, sys=6.98%, ctx=73604, majf=0, minf=795 00:19:16.513 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:16.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.513 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.513 issued rwts: total=0,73600,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:16.513 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:16.513 00:19:16.513 Run status group 0 (all jobs): 00:19:16.513 WRITE: bw=28.7MiB/s (30.1MB/s), 28.7MiB/s-28.7MiB/s (30.1MB/s-30.1MB/s), io=288MiB (301MB), run=10001-10001msec 00:19:16.513 00:19:16.513 Disk stats (read/write): 00:19:16.513 ublkb0: ios=0/72887, merge=0/0, ticks=0/9025, in_queue=9025, util=99.06% 00:19:16.513 07:55:37 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:19:16.513 07:55:37 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.513 07:55:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:16.513 [2024-11-06 07:55:37.055854] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:19:16.513 [2024-11-06 07:55:37.100330] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:16.513 [2024-11-06 07:55:37.101705] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:19:16.513 [2024-11-06 07:55:37.109396] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:16.513 [2024-11-06 07:55:37.109823] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:19:16.513 [2024-11-06 07:55:37.109844] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:19:16.513 07:55:37 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.513 07:55:37 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:19:16.513 07:55:37 
ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:19:16.513 07:55:37 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:19:16.513 07:55:37 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:16.513 07:55:37 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:16.513 07:55:37 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:16.513 07:55:37 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:16.513 07:55:37 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:19:16.513 07:55:37 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.513 07:55:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:16.513 [2024-11-06 07:55:37.124417] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:19:16.513 request: 00:19:16.513 { 00:19:16.513 "ublk_id": 0, 00:19:16.513 "method": "ublk_stop_disk", 00:19:16.513 "req_id": 1 00:19:16.513 } 00:19:16.513 Got JSON-RPC error response 00:19:16.513 response: 00:19:16.513 { 00:19:16.513 "code": -19, 00:19:16.513 "message": "No such device" 00:19:16.513 } 00:19:16.513 07:55:37 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:16.513 07:55:37 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:19:16.513 07:55:37 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:16.513 07:55:37 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:16.513 07:55:37 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:16.513 07:55:37 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:19:16.513 07:55:37 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.513 07:55:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:16.513 [2024-11-06 07:55:37.140485] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:16.513 [2024-11-06 07:55:37.148275] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:16.513 [2024-11-06 07:55:37.148352] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:19:16.513 07:55:37 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.513 07:55:37 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:19:16.513 07:55:37 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.513 07:55:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:16.513 07:55:37 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.513 07:55:37 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:19:16.513 07:55:37 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:19:16.513 07:55:37 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.513 07:55:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:16.513 07:55:37 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.513 07:55:37 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:19:16.513 07:55:37 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:19:16.513 07:55:37 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 
']' 00:19:16.513 07:55:37 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:19:16.513 07:55:37 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.513 07:55:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:16.513 07:55:37 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.513 07:55:37 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:19:16.513 07:55:37 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:19:16.513 07:55:37 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:19:16.513 00:19:16.513 real 0m11.812s 00:19:16.513 user 0m0.667s 00:19:16.513 sys 0m0.826s 00:19:16.513 ************************************ 00:19:16.513 END TEST test_create_ublk 00:19:16.513 ************************************ 00:19:16.513 07:55:37 ublk.test_create_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:16.513 07:55:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:16.513 07:55:37 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:19:16.513 07:55:37 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:16.513 07:55:37 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:16.513 07:55:37 ublk -- common/autotest_common.sh@10 -- # set +x 00:19:16.513 ************************************ 00:19:16.513 START TEST test_create_multi_ublk 00:19:16.513 ************************************ 00:19:16.513 07:55:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@1125 -- # test_create_multi_ublk 00:19:16.513 07:55:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:19:16.513 07:55:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.513 07:55:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:16.513 [2024-11-06 07:55:38.023275] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:19:16.513 [2024-11-06 07:55:38.026134] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:19:16.513 07:55:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.513 07:55:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:19:16.513 07:55:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:19:16.513 07:55:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:16.513 07:55:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:19:16.513 07:55:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.513 07:55:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:16.513 07:55:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.513 07:55:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:19:16.513 07:55:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:19:16.513 07:55:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.513 07:55:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:16.513 [2024-11-06 07:55:38.448579] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:19:16.513 [2024-11-06 
07:55:38.449196] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:19:16.514 [2024-11-06 07:55:38.449222] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:19:16.514 [2024-11-06 07:55:38.449238] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:19:16.514 [2024-11-06 07:55:38.460401] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:16.514 [2024-11-06 07:55:38.460470] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:16.514 [2024-11-06 07:55:38.472361] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:16.514 [2024-11-06 07:55:38.473443] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:19:16.514 [2024-11-06 07:55:38.497302] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:19:16.514 07:55:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.514 07:55:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:19:16.514 07:55:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:16.514 07:55:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:19:16.514 07:55:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.514 07:55:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:16.514 07:55:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.514 07:55:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:19:16.514 07:55:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:19:16.514 07:55:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.514 07:55:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:16.514 [2024-11-06 07:55:38.795471] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:19:16.514 [2024-11-06 07:55:38.796001] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:19:16.514 [2024-11-06 07:55:38.796027] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:19:16.514 [2024-11-06 07:55:38.796038] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:19:16.514 [2024-11-06 07:55:38.803316] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:16.514 [2024-11-06 07:55:38.803343] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:16.514 [2024-11-06 07:55:38.810287] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:16.514 [2024-11-06 07:55:38.811178] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:19:16.514 [2024-11-06 07:55:38.820450] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:19:16.514 07:55:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.514 07:55:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:19:16.514 07:55:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:16.514 07:55:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 
-- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:19:16.514 07:55:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.514 07:55:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:16.514 07:55:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.514 07:55:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:19:16.514 07:55:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:19:16.514 07:55:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.514 07:55:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:16.514 [2024-11-06 07:55:39.107433] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:19:16.514 [2024-11-06 07:55:39.107979] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:19:16.514 [2024-11-06 07:55:39.108001] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:19:16.514 [2024-11-06 07:55:39.108014] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:19:16.514 [2024-11-06 07:55:39.115322] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:16.514 [2024-11-06 07:55:39.115358] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:16.514 [2024-11-06 07:55:39.123311] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:16.514 [2024-11-06 07:55:39.124336] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:19:16.514 [2024-11-06 07:55:39.132748] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:19:16.514 07:55:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.514 07:55:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:19:16.514 07:55:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:16.514 07:55:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:19:16.514 07:55:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.514 07:55:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:17.081 07:55:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.081 07:55:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:19:17.081 07:55:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:19:17.081 07:55:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.081 07:55:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:17.081 [2024-11-06 07:55:39.428533] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:19:17.081 [2024-11-06 07:55:39.429106] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:19:17.081 [2024-11-06 07:55:39.429133] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:19:17.081 [2024-11-06 07:55:39.429143] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:19:17.081 [2024-11-06 07:55:39.436780] ublk.c: 
349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:17.081 [2024-11-06 07:55:39.436823] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:17.081 [2024-11-06 07:55:39.444332] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:17.081 [2024-11-06 07:55:39.445343] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:19:17.081 [2024-11-06 07:55:39.451724] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:19:17.081 07:55:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.081 07:55:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:19:17.081 07:55:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:19:17.081 07:55:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.081 07:55:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:17.081 07:55:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.081 07:55:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:19:17.081 { 00:19:17.081 "ublk_device": "/dev/ublkb0", 00:19:17.081 "id": 0, 00:19:17.081 "queue_depth": 512, 00:19:17.081 "num_queues": 4, 00:19:17.081 "bdev_name": "Malloc0" 00:19:17.081 }, 00:19:17.081 { 00:19:17.081 "ublk_device": "/dev/ublkb1", 00:19:17.081 "id": 1, 00:19:17.081 "queue_depth": 512, 00:19:17.081 "num_queues": 4, 00:19:17.081 "bdev_name": "Malloc1" 00:19:17.081 }, 00:19:17.081 { 00:19:17.081 "ublk_device": "/dev/ublkb2", 00:19:17.081 "id": 2, 00:19:17.081 "queue_depth": 512, 00:19:17.081 "num_queues": 4, 00:19:17.081 "bdev_name": "Malloc2" 00:19:17.081 }, 00:19:17.081 { 00:19:17.081 "ublk_device": "/dev/ublkb3", 00:19:17.081 "id": 3, 00:19:17.081 "queue_depth": 512, 00:19:17.081 "num_queues": 4, 00:19:17.081 "bdev_name": "Malloc3" 00:19:17.081 } 00:19:17.081 ]' 00:19:17.081 07:55:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:19:17.081 07:55:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:17.081 07:55:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:19:17.081 07:55:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:19:17.081 07:55:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:19:17.081 07:55:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:19:17.081 07:55:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:19:17.081 07:55:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:19:17.081 07:55:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:19:17.081 07:55:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:19:17.081 07:55:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:19:17.340 07:55:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:19:17.340 07:55:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:17.340 07:55:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:19:17.340 07:55:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:19:17.340 07:55:39 
ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:19:17.340 07:55:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:19:17.340 07:55:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:19:17.340 07:55:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:19:17.340 07:55:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:19:17.340 07:55:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:19:17.340 07:55:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:19:17.598 07:55:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:19:17.598 07:55:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:17.598 07:55:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:19:17.598 07:55:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:19:17.598 07:55:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:19:17.598 07:55:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:19:17.598 07:55:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:19:17.598 07:55:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:19:17.598 07:55:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:19:17.598 07:55:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:19:17.598 07:55:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:19:17.857 07:55:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:19:17.857 07:55:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:17.857 07:55:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:19:17.857 07:55:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:19:17.857 07:55:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:19:17.857 07:55:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:19:17.857 07:55:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:19:17.857 07:55:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:19:17.857 07:55:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:19:17.857 07:55:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:19:17.857 07:55:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:19:18.115 07:55:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:19:18.115 07:55:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:19:18.115 07:55:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:19:18.115 07:55:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:18.115 07:55:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:19:18.115 07:55:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.115 07:55:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:18.115 [2024-11-06 07:55:40.521635] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl 
cmd UBLK_CMD_STOP_DEV 00:19:18.115 [2024-11-06 07:55:40.566965] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:18.115 [2024-11-06 07:55:40.568777] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:19:18.115 [2024-11-06 07:55:40.574326] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:18.115 [2024-11-06 07:55:40.574722] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:19:18.115 [2024-11-06 07:55:40.574740] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:19:18.115 07:55:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.115 07:55:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:18.115 07:55:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:19:18.115 07:55:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.115 07:55:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:18.115 [2024-11-06 07:55:40.590415] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:19:18.115 [2024-11-06 07:55:40.622387] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:18.115 [2024-11-06 07:55:40.623766] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:19:18.115 [2024-11-06 07:55:40.631362] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:18.115 [2024-11-06 07:55:40.631749] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:19:18.115 [2024-11-06 07:55:40.631768] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:19:18.115 07:55:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.115 07:55:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:18.115 07:55:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:19:18.115 07:55:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.115 07:55:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:18.116 [2024-11-06 07:55:40.646485] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:19:18.116 [2024-11-06 07:55:40.678922] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:18.116 [2024-11-06 07:55:40.680542] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:19:18.116 [2024-11-06 07:55:40.689342] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:18.116 [2024-11-06 07:55:40.689778] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:19:18.116 [2024-11-06 07:55:40.689798] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:19:18.116 07:55:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.116 07:55:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:18.116 07:55:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:19:18.116 07:55:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.116 07:55:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:18.116 [2024-11-06 
07:55:40.705456] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:19:18.374 [2024-11-06 07:55:40.744371] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:18.374 [2024-11-06 07:55:40.745632] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:19:18.374 [2024-11-06 07:55:40.752338] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:18.374 [2024-11-06 07:55:40.752733] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:19:18.374 [2024-11-06 07:55:40.752752] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:19:18.374 07:55:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.374 07:55:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:19:18.633 [2024-11-06 07:55:41.064417] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:18.633 [2024-11-06 07:55:41.073150] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:18.633 [2024-11-06 07:55:41.073218] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:19:18.633 07:55:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:19:18.633 07:55:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:18.633 07:55:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:19:18.633 07:55:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.633 07:55:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:19.201 07:55:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.201 07:55:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:19.201 07:55:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:19:19.201 07:55:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.201 07:55:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:19.795 07:55:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.795 07:55:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:19.796 07:55:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:19:19.796 07:55:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.796 07:55:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:20.054 07:55:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.054 07:55:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:19:20.055 07:55:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:19:20.055 07:55:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.055 07:55:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:20.622 07:55:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.622 07:55:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:19:20.622 07:55:42 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 
00:19:20.622 07:55:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.622 07:55:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:20.622 07:55:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.622 07:55:42 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:19:20.622 07:55:42 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:19:20.622 07:55:43 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:19:20.622 07:55:43 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:19:20.622 07:55:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.622 07:55:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:20.622 07:55:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.622 07:55:43 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:19:20.622 07:55:43 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:19:20.622 07:55:43 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:19:20.622 00:19:20.622 real 0m5.072s 00:19:20.622 user 0m1.354s 00:19:20.622 sys 0m0.189s 00:19:20.622 ************************************ 00:19:20.622 END TEST test_create_multi_ublk 00:19:20.622 ************************************ 00:19:20.622 07:55:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:20.622 07:55:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:20.622 07:55:43 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:19:20.622 07:55:43 ublk -- ublk/ublk.sh@147 -- # cleanup 00:19:20.622 07:55:43 ublk -- ublk/ublk.sh@130 -- # killprocess 72943 00:19:20.622 07:55:43 ublk -- common/autotest_common.sh@950 -- # '[' -z 72943 ']' 00:19:20.622 07:55:43 ublk -- common/autotest_common.sh@954 -- # kill -0 72943 00:19:20.622 07:55:43 ublk -- common/autotest_common.sh@955 -- # uname 00:19:20.622 07:55:43 ublk -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:20.622 07:55:43 ublk -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72943 00:19:20.622 07:55:43 ublk -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:20.622 killing process with pid 72943 00:19:20.622 07:55:43 ublk -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:20.622 07:55:43 ublk -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72943' 00:19:20.622 07:55:43 ublk -- common/autotest_common.sh@969 -- # kill 72943 00:19:20.622 07:55:43 ublk -- common/autotest_common.sh@974 -- # wait 72943 00:19:21.996 [2024-11-06 07:55:44.307275] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:21.996 [2024-11-06 07:55:44.307393] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:23.370 00:19:23.370 real 0m31.729s 00:19:23.370 user 0m45.316s 00:19:23.370 sys 0m10.953s 00:19:23.370 07:55:45 ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:23.370 07:55:45 ublk -- common/autotest_common.sh@10 -- # set +x 00:19:23.370 ************************************ 00:19:23.370 END TEST ublk 00:19:23.370 ************************************ 00:19:23.370 07:55:45 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:19:23.370 07:55:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:23.370 
07:55:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:23.370 07:55:45 -- common/autotest_common.sh@10 -- # set +x 00:19:23.370 ************************************ 00:19:23.370 START TEST ublk_recovery 00:19:23.370 ************************************ 00:19:23.370 07:55:45 ublk_recovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:19:23.370 * Looking for test storage... 00:19:23.370 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:19:23.370 07:55:45 ublk_recovery -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:19:23.370 07:55:45 ublk_recovery -- common/autotest_common.sh@1689 -- # lcov --version 00:19:23.370 07:55:45 ublk_recovery -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:19:23.370 07:55:45 ublk_recovery -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:19:23.370 07:55:45 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:23.370 07:55:45 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:23.370 07:55:45 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:23.370 07:55:45 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:19:23.370 07:55:45 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:19:23.370 07:55:45 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:19:23.370 07:55:45 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:19:23.370 07:55:45 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:19:23.370 07:55:45 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:19:23.370 07:55:45 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:19:23.370 07:55:45 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:23.370 07:55:45 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:19:23.370 07:55:45 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:19:23.370 07:55:45 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:23.370 07:55:45 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:23.370 07:55:45 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:19:23.370 07:55:45 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:19:23.370 07:55:45 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:23.370 07:55:45 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:19:23.370 07:55:45 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:19:23.370 07:55:45 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:19:23.370 07:55:45 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:19:23.370 07:55:45 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:23.370 07:55:45 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:19:23.370 07:55:45 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:19:23.370 07:55:45 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:23.370 07:55:45 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:23.370 07:55:45 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:19:23.370 07:55:45 ublk_recovery -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:23.370 07:55:45 ublk_recovery -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:19:23.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:23.370 --rc genhtml_branch_coverage=1 00:19:23.370 --rc genhtml_function_coverage=1 00:19:23.370 --rc genhtml_legend=1 00:19:23.370 --rc geninfo_all_blocks=1 00:19:23.370 --rc geninfo_unexecuted_blocks=1 00:19:23.370 00:19:23.370 ' 00:19:23.370 07:55:45 ublk_recovery -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:19:23.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:23.370 --rc genhtml_branch_coverage=1 00:19:23.370 --rc genhtml_function_coverage=1 00:19:23.370 --rc genhtml_legend=1 00:19:23.370 --rc geninfo_all_blocks=1 00:19:23.370 --rc geninfo_unexecuted_blocks=1 00:19:23.370 00:19:23.370 ' 00:19:23.370 07:55:45 ublk_recovery -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:19:23.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:23.370 --rc genhtml_branch_coverage=1 00:19:23.370 --rc genhtml_function_coverage=1 00:19:23.370 --rc genhtml_legend=1 00:19:23.370 --rc geninfo_all_blocks=1 00:19:23.370 --rc geninfo_unexecuted_blocks=1 00:19:23.370 00:19:23.370 ' 00:19:23.370 07:55:45 ublk_recovery -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:19:23.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:23.370 --rc genhtml_branch_coverage=1 00:19:23.370 --rc genhtml_function_coverage=1 00:19:23.370 --rc genhtml_legend=1 00:19:23.370 --rc geninfo_all_blocks=1 00:19:23.370 --rc geninfo_unexecuted_blocks=1 00:19:23.370 00:19:23.370 ' 00:19:23.370 07:55:45 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:19:23.370 07:55:45 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:19:23.370 07:55:45 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:19:23.370 07:55:45 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:19:23.370 07:55:45 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:19:23.370 07:55:45 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:19:23.370 07:55:45 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:19:23.370 07:55:45 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:19:23.370 07:55:45 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:19:23.370 07:55:45 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:19:23.370 07:55:45 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=73381 00:19:23.370 07:55:45 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:19:23.370 07:55:45 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:23.370 07:55:45 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 73381 00:19:23.370 07:55:45 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 73381 ']' 00:19:23.370 07:55:45 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.371 07:55:45 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:23.371 07:55:45 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:23.371 07:55:45 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:23.371 07:55:45 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:23.371 [2024-11-06 07:55:45.996786] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:19:23.371 [2024-11-06 07:55:45.997284] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73381 ] 00:19:23.629 [2024-11-06 07:55:46.196987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:23.887 [2024-11-06 07:55:46.381910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.887 [2024-11-06 07:55:46.381992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:25.261 07:55:47 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:25.261 07:55:47 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:19:25.261 07:55:47 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:19:25.261 07:55:47 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.261 07:55:47 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:25.261 [2024-11-06 07:55:47.478300] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:19:25.261 [2024-11-06 07:55:47.481879] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:19:25.261 07:55:47 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.262 07:55:47 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:19:25.262 07:55:47 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.262 07:55:47 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:25.262 malloc0 00:19:25.262 07:55:47 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.262 07:55:47 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:19:25.262 07:55:47 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.262 07:55:47 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:25.262 [2024-11-06 07:55:47.646537] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:19:25.262 [2024-11-06 07:55:47.646711] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:19:25.262 [2024-11-06 07:55:47.646733] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:19:25.262 [2024-11-06 07:55:47.646746] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:19:25.262 [2024-11-06 07:55:47.655469] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:25.262 [2024-11-06 07:55:47.655513] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:25.262 [2024-11-06 07:55:47.662324] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:25.262 [2024-11-06 07:55:47.662556] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:19:25.262 [2024-11-06 07:55:47.676451] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:19:25.262 1 00:19:25.262 07:55:47 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.262 07:55:47 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:19:26.197 07:55:48 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=73422 00:19:26.197 07:55:48 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:19:26.197 07:55:48 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:19:26.197 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:26.197 fio-3.35 00:19:26.197 Starting 1 process 00:19:31.473 07:55:53 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 73381 00:19:31.473 07:55:53 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:19:36.742 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 73381 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:19:36.742 07:55:58 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=73523 00:19:36.742 07:55:58 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:36.742 07:55:58 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:19:36.742 07:55:58 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 73523 00:19:36.742 07:55:58 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 73523 ']' 00:19:36.742 07:55:58 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:36.742 07:55:58 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:36.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:36.742 07:55:58 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:36.742 07:55:58 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:36.742 07:55:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:36.742 [2024-11-06 07:55:58.829396] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
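This is the staged crash at the heart of the recovery test: with fio driving 60 seconds of random read/write at /dev/ublkb1 (2 queues, depth 128), the pid-73381 target is SIGKILLed mid-run, the kernel ublk device is left orphaned, and a replacement spdk_tgt (pid 73523) is started in its place; its EAL dump follows. The re-attach goes through ublk_recover_disk, visible a few lines below. In sketch form, reusing the variables from the earlier sketches:

    # Crash the live target while fio is still running against /dev/ublkb1.
    kill -9 "$spdk_pid"
    # Start a replacement target and wait for its RPC socket as before.
    "$SPDK/build/bin/spdk_tgt" -m 0x3 -L ublk &
    spdk_pid=$!
    # Recreate the ublk target and a backing bdev under the same name ...
    "$RPC" ublk_create_target
    "$RPC" bdev_malloc_create -b malloc0 64 4096
    # ... then re-bind kernel ublk id 1 to the new bdev. Once user
    # recovery completes, queued I/O resumes and the fio job above can
    # finish its 60 s run, which is what the results below confirm.
    "$RPC" ublk_recover_disk malloc0 1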
00:19:36.742 [2024-11-06 07:55:58.829581] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73523 ] 00:19:36.742 [2024-11-06 07:55:59.024682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:36.742 [2024-11-06 07:55:59.187517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:36.742 [2024-11-06 07:55:59.187526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:37.673 07:56:00 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:37.673 07:56:00 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:19:37.673 07:56:00 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:19:37.673 07:56:00 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.673 07:56:00 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:37.673 [2024-11-06 07:56:00.182302] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:19:37.673 [2024-11-06 07:56:00.186046] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:19:37.673 07:56:00 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.673 07:56:00 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:19:37.673 07:56:00 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.673 07:56:00 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:37.931 malloc0 00:19:37.931 07:56:00 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.931 07:56:00 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:19:37.931 07:56:00 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.931 07:56:00 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:37.931 [2024-11-06 07:56:00.366570] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:19:37.931 [2024-11-06 07:56:00.366639] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:19:37.931 [2024-11-06 07:56:00.366659] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:19:37.931 [2024-11-06 07:56:00.374390] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:19:37.931 [2024-11-06 07:56:00.374436] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:19:37.931 1 00:19:37.931 07:56:00 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.931 07:56:00 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 73422 00:19:38.865 [2024-11-06 07:56:01.374479] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:19:38.865 [2024-11-06 07:56:01.382365] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:19:38.865 [2024-11-06 07:56:01.382417] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:19:39.800 [2024-11-06 07:56:02.382495] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:19:39.800 [2024-11-06 07:56:02.386321] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:19:39.800 [2024-11-06 07:56:02.386356] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:19:41.181 [2024-11-06 07:56:03.386398] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:19:41.181 [2024-11-06 07:56:03.394320] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:19:41.181 [2024-11-06 07:56:03.394350] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:19:41.181 [2024-11-06 07:56:03.394369] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:19:41.181 [2024-11-06 07:56:03.394510] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:20:03.110 [2024-11-06 07:56:24.149312] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:20:03.111 [2024-11-06 07:56:24.157427] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:20:03.111 [2024-11-06 07:56:24.165679] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:20:03.111 [2024-11-06 07:56:24.165730] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:20:29.712 00:20:29.712 fio_test: (groupid=0, jobs=1): err= 0: pid=73425: Wed Nov 6 07:56:48 2024 00:20:29.712 read: IOPS=8317, BW=32.5MiB/s (34.1MB/s)(1950MiB/60002msec) 00:20:29.712 slat (nsec): min=1832, max=722386, avg=6694.58, stdev=3244.79 00:20:29.712 clat (usec): min=1395, max=30488k, avg=7694.28, stdev=347856.83 00:20:29.712 lat (usec): min=1404, max=30488k, avg=7700.97, stdev=347856.82 00:20:29.712 clat percentiles (msec): 00:20:29.712 | 1.00th=[ 3], 5.00th=[ 4], 10.00th=[ 4], 20.00th=[ 4], 00:20:29.712 | 30.00th=[ 4], 40.00th=[ 4], 50.00th=[ 4], 60.00th=[ 4], 00:20:29.712 | 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 5], 95.00th=[ 6], 00:20:29.712 | 99.00th=[ 8], 99.50th=[ 8], 99.90th=[ 14], 99.95th=[ 15], 00:20:29.712 | 99.99th=[17113] 00:20:29.712 bw ( KiB/s): min=20752, max=74840, per=100.00%, avg=66578.17, stdev=11362.79, samples=59 00:20:29.712 iops : min= 5188, max=18710, avg=16644.53, stdev=2840.69, samples=59 00:20:29.712 write: IOPS=8303, BW=32.4MiB/s (34.0MB/s)(1946MiB/60002msec); 0 zone resets 00:20:29.712 slat (usec): min=2, max=605, avg= 6.97, stdev= 3.33 00:20:29.712 clat (usec): min=1121, max=30488k, avg=7691.77, stdev=342756.68 00:20:29.712 lat (usec): min=1147, max=30488k, avg=7698.74, stdev=342756.69 00:20:29.712 clat percentiles (msec): 00:20:29.712 | 1.00th=[ 4], 5.00th=[ 4], 10.00th=[ 4], 20.00th=[ 4], 00:20:29.712 | 30.00th=[ 4], 40.00th=[ 4], 50.00th=[ 4], 60.00th=[ 4], 00:20:29.712 | 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 5], 95.00th=[ 6], 00:20:29.712 | 99.00th=[ 8], 99.50th=[ 8], 99.90th=[ 14], 99.95th=[ 15], 00:20:29.712 | 99.99th=[17113] 00:20:29.712 bw ( KiB/s): min=21520, max=74208, per=100.00%, avg=66468.25, stdev=11160.53, samples=59 00:20:29.712 iops : min= 5380, max=18552, avg=16617.03, stdev=2790.12, samples=59 00:20:29.712 lat (msec) : 2=0.02%, 4=86.35%, 10=13.39%, 20=0.22%, 50=0.01% 00:20:29.712 lat (msec) : >=2000=0.01% 00:20:29.712 cpu : usr=4.99%, sys=10.72%, ctx=33311, majf=0, minf=13 00:20:29.712 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:20:29.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.712 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:29.712 issued rwts: total=499086,498234,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:29.712 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:20:29.712 00:20:29.712 Run status group 0 (all jobs): 00:20:29.712 READ: bw=32.5MiB/s (34.1MB/s), 32.5MiB/s-32.5MiB/s (34.1MB/s-34.1MB/s), io=1950MiB (2044MB), run=60002-60002msec 00:20:29.712 WRITE: bw=32.4MiB/s (34.0MB/s), 32.4MiB/s-32.4MiB/s (34.0MB/s-34.0MB/s), io=1946MiB (2041MB), run=60002-60002msec 00:20:29.712 00:20:29.712 Disk stats (read/write): 00:20:29.712 ublkb1: ios=496921/496121, merge=0/0, ticks=3784144/3717394, in_queue=7501539, util=99.95% 00:20:29.712 07:56:48 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:20:29.712 07:56:48 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.712 07:56:48 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:29.712 [2024-11-06 07:56:48.976972] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:20:29.712 [2024-11-06 07:56:49.025372] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:29.712 [2024-11-06 07:56:49.025697] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:20:29.712 [2024-11-06 07:56:49.035287] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:29.712 [2024-11-06 07:56:49.035520] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:20:29.712 [2024-11-06 07:56:49.035537] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:20:29.712 07:56:49 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.712 07:56:49 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:20:29.712 07:56:49 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.712 07:56:49 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:29.712 [2024-11-06 07:56:49.043460] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:29.712 [2024-11-06 07:56:49.050280] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:29.712 [2024-11-06 07:56:49.050335] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:20:29.712 07:56:49 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.712 07:56:49 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:20:29.712 07:56:49 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:20:29.712 07:56:49 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 73523 00:20:29.712 07:56:49 ublk_recovery -- common/autotest_common.sh@950 -- # '[' -z 73523 ']' 00:20:29.712 07:56:49 ublk_recovery -- common/autotest_common.sh@954 -- # kill -0 73523 00:20:29.713 07:56:49 ublk_recovery -- common/autotest_common.sh@955 -- # uname 00:20:29.713 07:56:49 ublk_recovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:29.713 07:56:49 ublk_recovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73523 00:20:29.713 07:56:49 ublk_recovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:29.713 07:56:49 ublk_recovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:29.713 killing process with pid 73523 00:20:29.713 07:56:49 ublk_recovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73523' 00:20:29.713 07:56:49 ublk_recovery -- common/autotest_common.sh@969 -- # kill 73523 00:20:29.713 07:56:49 ublk_recovery -- common/autotest_common.sh@974 -- # wait 73523 00:20:29.713 [2024-11-06 07:56:50.643411] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:29.713 
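The recovery flow just traced reduces to a short RPC sequence (rpc_cmd in the trace is the test framework's wrapper around plain rpc.py). A minimal sketch of the same steps against a running spdk_tgt, assuming the default /var/tmp/spdk.sock socket:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC ublk_create_target                      # ublk_recovery.sh@47: bring up the UBLK target
    $RPC bdev_malloc_create -b malloc0 64 4096   # @48: 64 MiB malloc bdev, 4096 B blocks
    $RPC ublk_recover_disk malloc0 1             # @49: re-attach ublk device 1 to malloc0
    # fio then drives ublkb1 for 60s while user recovery completes (results above)
    $RPC ublk_stop_disk 1                        # @55: stop the recovered device
    $RPC ublk_destroy_target                     # @56: tear the target down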
[2024-11-06 07:56:50.643538] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:29.713 00:20:29.713 real 1m6.511s 00:20:29.713 user 1m51.407s 00:20:29.713 sys 0m20.267s 00:20:29.713 07:56:52 ublk_recovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:29.713 ************************************ 00:20:29.713 END TEST ublk_recovery 00:20:29.713 07:56:52 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:29.713 ************************************ 00:20:29.713 07:56:52 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:20:29.713 07:56:52 -- spdk/autotest.sh@256 -- # timing_exit lib 00:20:29.713 07:56:52 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:29.713 07:56:52 -- common/autotest_common.sh@10 -- # set +x 00:20:29.713 07:56:52 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:20:29.713 07:56:52 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:20:29.713 07:56:52 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:20:29.713 07:56:52 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:20:29.713 07:56:52 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:20:29.713 07:56:52 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:20:29.713 07:56:52 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:20:29.713 07:56:52 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:20:29.713 07:56:52 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:20:29.713 07:56:52 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:20:29.713 07:56:52 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:20:29.713 07:56:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:29.713 07:56:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:29.713 07:56:52 -- common/autotest_common.sh@10 -- # set +x 00:20:29.713 ************************************ 00:20:29.713 START TEST ftl 00:20:29.713 ************************************ 00:20:29.713 07:56:52 ftl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:20:29.972 * Looking for test storage... 00:20:29.972 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:29.972 07:56:52 ftl -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:20:29.972 07:56:52 ftl -- common/autotest_common.sh@1689 -- # lcov --version 00:20:29.972 07:56:52 ftl -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:20:29.972 07:56:52 ftl -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:20:29.972 07:56:52 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:29.972 07:56:52 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:29.972 07:56:52 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:29.972 07:56:52 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:20:29.972 07:56:52 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:20:29.972 07:56:52 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:20:29.972 07:56:52 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:20:29.972 07:56:52 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:20:29.972 07:56:52 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:20:29.972 07:56:52 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:20:29.972 07:56:52 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:29.972 07:56:52 ftl -- scripts/common.sh@344 -- # case "$op" in 00:20:29.972 07:56:52 ftl -- scripts/common.sh@345 -- # : 1 00:20:29.972 07:56:52 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:29.972 07:56:52 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:29.972 07:56:52 ftl -- scripts/common.sh@365 -- # decimal 1 00:20:29.972 07:56:52 ftl -- scripts/common.sh@353 -- # local d=1 00:20:29.972 07:56:52 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:29.972 07:56:52 ftl -- scripts/common.sh@355 -- # echo 1 00:20:29.972 07:56:52 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:20:29.972 07:56:52 ftl -- scripts/common.sh@366 -- # decimal 2 00:20:29.972 07:56:52 ftl -- scripts/common.sh@353 -- # local d=2 00:20:29.972 07:56:52 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:29.972 07:56:52 ftl -- scripts/common.sh@355 -- # echo 2 00:20:29.972 07:56:52 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:20:29.972 07:56:52 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:29.972 07:56:52 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:29.972 07:56:52 ftl -- scripts/common.sh@368 -- # return 0 00:20:29.972 07:56:52 ftl -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:29.972 07:56:52 ftl -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:20:29.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.972 --rc genhtml_branch_coverage=1 00:20:29.972 --rc genhtml_function_coverage=1 00:20:29.972 --rc genhtml_legend=1 00:20:29.972 --rc geninfo_all_blocks=1 00:20:29.972 --rc geninfo_unexecuted_blocks=1 00:20:29.972 00:20:29.972 ' 00:20:29.972 07:56:52 ftl -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:20:29.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.972 --rc genhtml_branch_coverage=1 00:20:29.972 --rc genhtml_function_coverage=1 00:20:29.972 --rc genhtml_legend=1 00:20:29.972 --rc geninfo_all_blocks=1 00:20:29.972 --rc geninfo_unexecuted_blocks=1 00:20:29.972 00:20:29.972 ' 00:20:29.972 07:56:52 ftl -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:20:29.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.972 --rc genhtml_branch_coverage=1 00:20:29.972 --rc genhtml_function_coverage=1 00:20:29.972 --rc genhtml_legend=1 00:20:29.972 --rc geninfo_all_blocks=1 00:20:29.972 --rc geninfo_unexecuted_blocks=1 00:20:29.972 00:20:29.972 ' 00:20:29.972 07:56:52 ftl -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:20:29.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.972 --rc genhtml_branch_coverage=1 00:20:29.972 --rc genhtml_function_coverage=1 00:20:29.972 --rc genhtml_legend=1 00:20:29.972 --rc geninfo_all_blocks=1 00:20:29.972 --rc geninfo_unexecuted_blocks=1 00:20:29.972 00:20:29.972 ' 00:20:29.972 07:56:52 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:29.972 07:56:52 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:20:29.972 07:56:52 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:29.972 07:56:52 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:29.972 07:56:52 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
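The lcov version check traced above is scripts/common.sh splitting each version string on '.', '-' or ':' and comparing the fields numerically, left to right. Condensed into a sketch (not the verbatim helper):

    lt() {                                        # 'less than' for dotted version strings
        local IFS=.-: a b i
        read -ra a <<< "$1"; read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            ((${a[i]:-0} < ${b[i]:-0})) && return 0   # first lower field decides
            ((${a[i]:-0} > ${b[i]:-0})) && return 1
        done
        return 1                                  # equal versions are not less-than
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 &&
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'

Here 1.15 < 2 holds, so the pre-2.0 '--rc lcov_*' option spellings shown in the exports above are selected.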
00:20:29.972 07:56:52 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:29.972 07:56:52 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:29.972 07:56:52 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:29.972 07:56:52 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:29.972 07:56:52 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:29.972 07:56:52 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:29.972 07:56:52 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:29.972 07:56:52 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:29.972 07:56:52 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:29.972 07:56:52 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:29.972 07:56:52 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:29.972 07:56:52 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:29.972 07:56:52 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:29.972 07:56:52 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:29.972 07:56:52 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:29.972 07:56:52 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:29.972 07:56:52 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:29.972 07:56:52 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:29.972 07:56:52 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:29.972 07:56:52 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:29.972 07:56:52 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:29.972 07:56:52 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:29.972 07:56:52 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:29.972 07:56:52 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:29.972 07:56:52 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:29.972 07:56:52 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:20:29.972 07:56:52 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:20:29.972 07:56:52 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:20:29.972 07:56:52 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:20:29.972 07:56:52 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:30.231 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:30.490 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:30.490 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:30.490 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:30.490 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:30.490 07:56:53 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=74330 00:20:30.490 07:56:53 ftl -- ftl/ftl.sh@38 -- # waitforlisten 74330 00:20:30.490 07:56:53 ftl -- common/autotest_common.sh@831 -- # '[' -z 74330 ']' 00:20:30.490 07:56:53 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:20:30.490 07:56:53 ftl -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:30.490 07:56:53 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:30.490 07:56:53 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:30.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:30.491 07:56:53 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:30.491 07:56:53 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:30.750 [2024-11-06 07:56:53.169751] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:20:30.750 [2024-11-06 07:56:53.169982] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74330 ] 00:20:30.750 [2024-11-06 07:56:53.366053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.009 [2024-11-06 07:56:53.521103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.576 07:56:54 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:31.576 07:56:54 ftl -- common/autotest_common.sh@864 -- # return 0 00:20:31.576 07:56:54 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:20:31.835 07:56:54 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:20:33.212 07:56:55 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:20:33.212 07:56:55 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:33.471 07:56:56 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:20:33.471 07:56:56 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:20:33.471 07:56:56 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:20:34.037 07:56:56 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:20:34.037 07:56:56 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:20:34.037 07:56:56 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:20:34.037 07:56:56 ftl -- ftl/ftl.sh@50 -- # break 00:20:34.037 07:56:56 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:20:34.037 07:56:56 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:20:34.037 07:56:56 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:20:34.037 07:56:56 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:20:34.037 07:56:56 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:20:34.037 07:56:56 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:20:34.037 07:56:56 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:20:34.037 07:56:56 ftl -- ftl/ftl.sh@63 -- # break 00:20:34.037 07:56:56 ftl -- ftl/ftl.sh@66 -- # killprocess 74330 00:20:34.037 07:56:56 ftl -- common/autotest_common.sh@950 -- # '[' -z 74330 ']' 00:20:34.037 07:56:56 ftl -- common/autotest_common.sh@954 -- # kill -0 74330 00:20:34.037 07:56:56 ftl -- common/autotest_common.sh@955 -- # uname 00:20:34.037 07:56:56 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:34.037 07:56:56 ftl -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74330 00:20:34.296 07:56:56 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:34.296 killing process with pid 74330 00:20:34.296 07:56:56 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:34.297 07:56:56 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74330' 00:20:34.297 07:56:56 ftl -- common/autotest_common.sh@969 -- # kill 74330 00:20:34.297 07:56:56 ftl -- common/autotest_common.sh@974 -- # wait 74330 00:20:36.830 07:56:58 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:20:36.830 07:56:58 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:20:36.830 07:56:58 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:20:36.830 07:56:58 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:36.830 07:56:58 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:36.830 ************************************ 00:20:36.830 START TEST ftl_fio_basic 00:20:36.830 ************************************ 00:20:36.830 07:56:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:20:36.830 * Looking for test storage... 00:20:36.830 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:36.830 07:56:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:20:36.830 07:56:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1689 -- # lcov --version 00:20:36.830 07:56:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:20:36.830 07:56:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:20:36.830 07:56:59 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:36.830 07:56:59 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:36.830 07:56:59 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:36.830 07:56:59 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:20:36.830 07:56:59 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:20:36.830 07:56:59 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:20:36.830 07:56:59 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:20:36.830 07:56:59 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:20:36.830 07:56:59 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:20:36.830 07:56:59 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:20:36.830 07:56:59 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:36.830 07:56:59 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:20:36.830 07:56:59 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:20:36.830 07:56:59 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:36.830 07:56:59 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:36.830 07:56:59 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:20:36.830 07:56:59 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:20:36.830 07:56:59 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:36.830 07:56:59 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:20:36.830 07:56:59 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:20:36.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.831 --rc genhtml_branch_coverage=1 00:20:36.831 --rc genhtml_function_coverage=1 00:20:36.831 --rc genhtml_legend=1 00:20:36.831 --rc geninfo_all_blocks=1 00:20:36.831 --rc geninfo_unexecuted_blocks=1 00:20:36.831 00:20:36.831 ' 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:20:36.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.831 --rc genhtml_branch_coverage=1 00:20:36.831 --rc genhtml_function_coverage=1 00:20:36.831 --rc genhtml_legend=1 00:20:36.831 --rc geninfo_all_blocks=1 00:20:36.831 --rc geninfo_unexecuted_blocks=1 00:20:36.831 00:20:36.831 ' 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:20:36.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.831 --rc genhtml_branch_coverage=1 00:20:36.831 --rc genhtml_function_coverage=1 00:20:36.831 --rc genhtml_legend=1 00:20:36.831 --rc geninfo_all_blocks=1 00:20:36.831 --rc geninfo_unexecuted_blocks=1 00:20:36.831 00:20:36.831 ' 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:20:36.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.831 --rc genhtml_branch_coverage=1 00:20:36.831 --rc genhtml_function_coverage=1 00:20:36.831 --rc genhtml_legend=1 00:20:36.831 --rc geninfo_all_blocks=1 00:20:36.831 --rc geninfo_unexecuted_blocks=1 00:20:36.831 00:20:36.831 ' 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
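fio.sh, entered above as 'fio.sh 0000:00:11.0 0000:00:10.0 basic', maps its positional arguments onto the suite table declared just below. In sketch form (the variable mapping is inferred from the trace, not copied from the script):

    device=$1            # 0000:00:11.0 -> base NVMe device for FTL
    cache_device=$2      # 0000:00:10.0 -> NV cache (write buffer) device
    tests=${suite[$3]}   # basic -> 'randw-verify randw-verify-j2 randw-verify-depth128'
    uuid=                # empty: create a fresh FTL instance instead of reusing one
    timeout=240          # seconds; reused later as 'rpc.py -t 240 bdev_ftl_create ...'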
00:20:36.831 07:56:59 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=74473 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 74473 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # '[' -z 74473 ']' 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:36.831 07:56:59 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:36.831 [2024-11-06 07:56:59.233999] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
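waitforlisten, traced above for the new pid 74473, polls the target's RPC UNIX socket until it answers or max_retries=100 runs out. A minimal stand-in, assuming the default socket path and using rpc_get_methods as a cheap probe (the real helper in autotest_common.sh does more bookkeeping):

    pid=74473 rpc_addr=/var/tmp/spdk.sock
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || break    # stop waiting if spdk_tgt died
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$rpc_addr" \
            rpc_get_methods &>/dev/null && break
        sleep 0.5
    done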
00:20:36.831 [2024-11-06 07:56:59.234201] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74473 ] 00:20:36.831 [2024-11-06 07:56:59.427933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:37.090 [2024-11-06 07:56:59.587291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:37.090 [2024-11-06 07:56:59.587403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:37.090 [2024-11-06 07:56:59.587414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:38.025 07:57:00 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:38.025 07:57:00 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # return 0 00:20:38.025 07:57:00 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:38.025 07:57:00 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:20:38.025 07:57:00 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:38.025 07:57:00 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:20:38.025 07:57:00 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:20:38.025 07:57:00 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:38.284 07:57:00 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:38.284 07:57:00 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:20:38.284 07:57:00 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:38.284 07:57:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:20:38.284 07:57:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:38.284 07:57:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:20:38.284 07:57:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:20:38.284 07:57:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:38.543 07:57:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:38.543 { 00:20:38.543 "name": "nvme0n1", 00:20:38.543 "aliases": [ 00:20:38.543 "d38579a7-3fe5-4204-93f8-d801790ed779" 00:20:38.543 ], 00:20:38.543 "product_name": "NVMe disk", 00:20:38.543 "block_size": 4096, 00:20:38.543 "num_blocks": 1310720, 00:20:38.543 "uuid": "d38579a7-3fe5-4204-93f8-d801790ed779", 00:20:38.543 "numa_id": -1, 00:20:38.543 "assigned_rate_limits": { 00:20:38.543 "rw_ios_per_sec": 0, 00:20:38.543 "rw_mbytes_per_sec": 0, 00:20:38.543 "r_mbytes_per_sec": 0, 00:20:38.543 "w_mbytes_per_sec": 0 00:20:38.543 }, 00:20:38.543 "claimed": false, 00:20:38.543 "zoned": false, 00:20:38.543 "supported_io_types": { 00:20:38.543 "read": true, 00:20:38.543 "write": true, 00:20:38.543 "unmap": true, 00:20:38.543 "flush": true, 00:20:38.543 "reset": true, 00:20:38.543 "nvme_admin": true, 00:20:38.543 "nvme_io": true, 00:20:38.543 "nvme_io_md": false, 00:20:38.543 "write_zeroes": true, 00:20:38.543 "zcopy": false, 00:20:38.543 "get_zone_info": false, 00:20:38.543 "zone_management": false, 00:20:38.543 "zone_append": false, 00:20:38.543 "compare": true, 00:20:38.543 "compare_and_write": false, 00:20:38.543 "abort": true, 00:20:38.543 
"seek_hole": false, 00:20:38.543 "seek_data": false, 00:20:38.543 "copy": true, 00:20:38.543 "nvme_iov_md": false 00:20:38.543 }, 00:20:38.543 "driver_specific": { 00:20:38.543 "nvme": [ 00:20:38.543 { 00:20:38.543 "pci_address": "0000:00:11.0", 00:20:38.543 "trid": { 00:20:38.543 "trtype": "PCIe", 00:20:38.543 "traddr": "0000:00:11.0" 00:20:38.543 }, 00:20:38.543 "ctrlr_data": { 00:20:38.543 "cntlid": 0, 00:20:38.543 "vendor_id": "0x1b36", 00:20:38.543 "model_number": "QEMU NVMe Ctrl", 00:20:38.543 "serial_number": "12341", 00:20:38.543 "firmware_revision": "8.0.0", 00:20:38.543 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:38.543 "oacs": { 00:20:38.543 "security": 0, 00:20:38.543 "format": 1, 00:20:38.543 "firmware": 0, 00:20:38.543 "ns_manage": 1 00:20:38.543 }, 00:20:38.543 "multi_ctrlr": false, 00:20:38.543 "ana_reporting": false 00:20:38.543 }, 00:20:38.543 "vs": { 00:20:38.543 "nvme_version": "1.4" 00:20:38.543 }, 00:20:38.543 "ns_data": { 00:20:38.543 "id": 1, 00:20:38.543 "can_share": false 00:20:38.543 } 00:20:38.543 } 00:20:38.543 ], 00:20:38.543 "mp_policy": "active_passive" 00:20:38.543 } 00:20:38.543 } 00:20:38.543 ]' 00:20:38.543 07:57:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:20:38.802 07:57:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:20:38.802 07:57:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:38.802 07:57:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:20:38.803 07:57:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:20:38.803 07:57:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:20:38.803 07:57:01 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:20:38.803 07:57:01 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:38.803 07:57:01 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:20:38.803 07:57:01 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:38.803 07:57:01 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:39.061 07:57:01 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:20:39.061 07:57:01 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:39.629 07:57:01 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=ba0ad3af-6f84-48a7-876b-31ab1bf8e75b 00:20:39.629 07:57:01 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ba0ad3af-6f84-48a7-876b-31ab1bf8e75b 00:20:39.888 07:57:02 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=f1b4422a-eedd-4731-af48-796762d36b97 00:20:39.888 07:57:02 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 f1b4422a-eedd-4731-af48-796762d36b97 00:20:39.888 07:57:02 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:20:39.888 07:57:02 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:39.888 07:57:02 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=f1b4422a-eedd-4731-af48-796762d36b97 00:20:39.888 07:57:02 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:20:39.888 07:57:02 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size f1b4422a-eedd-4731-af48-796762d36b97 00:20:39.888 07:57:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=f1b4422a-eedd-4731-af48-796762d36b97 
00:20:39.888 07:57:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:39.888 07:57:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:20:39.888 07:57:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:20:39.888 07:57:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f1b4422a-eedd-4731-af48-796762d36b97 00:20:40.147 07:57:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:40.147 { 00:20:40.147 "name": "f1b4422a-eedd-4731-af48-796762d36b97", 00:20:40.147 "aliases": [ 00:20:40.147 "lvs/nvme0n1p0" 00:20:40.147 ], 00:20:40.147 "product_name": "Logical Volume", 00:20:40.147 "block_size": 4096, 00:20:40.147 "num_blocks": 26476544, 00:20:40.147 "uuid": "f1b4422a-eedd-4731-af48-796762d36b97", 00:20:40.147 "assigned_rate_limits": { 00:20:40.147 "rw_ios_per_sec": 0, 00:20:40.147 "rw_mbytes_per_sec": 0, 00:20:40.147 "r_mbytes_per_sec": 0, 00:20:40.147 "w_mbytes_per_sec": 0 00:20:40.147 }, 00:20:40.147 "claimed": false, 00:20:40.147 "zoned": false, 00:20:40.147 "supported_io_types": { 00:20:40.147 "read": true, 00:20:40.147 "write": true, 00:20:40.147 "unmap": true, 00:20:40.147 "flush": false, 00:20:40.147 "reset": true, 00:20:40.147 "nvme_admin": false, 00:20:40.147 "nvme_io": false, 00:20:40.147 "nvme_io_md": false, 00:20:40.147 "write_zeroes": true, 00:20:40.147 "zcopy": false, 00:20:40.147 "get_zone_info": false, 00:20:40.147 "zone_management": false, 00:20:40.147 "zone_append": false, 00:20:40.147 "compare": false, 00:20:40.147 "compare_and_write": false, 00:20:40.147 "abort": false, 00:20:40.147 "seek_hole": true, 00:20:40.147 "seek_data": true, 00:20:40.147 "copy": false, 00:20:40.147 "nvme_iov_md": false 00:20:40.147 }, 00:20:40.147 "driver_specific": { 00:20:40.147 "lvol": { 00:20:40.147 "lvol_store_uuid": "ba0ad3af-6f84-48a7-876b-31ab1bf8e75b", 00:20:40.147 "base_bdev": "nvme0n1", 00:20:40.147 "thin_provision": true, 00:20:40.147 "num_allocated_clusters": 0, 00:20:40.147 "snapshot": false, 00:20:40.147 "clone": false, 00:20:40.147 "esnap_clone": false 00:20:40.147 } 00:20:40.147 } 00:20:40.147 } 00:20:40.147 ]' 00:20:40.147 07:57:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:20:40.147 07:57:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:20:40.147 07:57:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:40.406 07:57:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:20:40.406 07:57:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:20:40.406 07:57:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:20:40.406 07:57:02 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:20:40.406 07:57:02 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:20:40.406 07:57:02 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:40.663 07:57:03 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:40.663 07:57:03 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:40.663 07:57:03 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size f1b4422a-eedd-4731-af48-796762d36b97 00:20:40.663 07:57:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=f1b4422a-eedd-4731-af48-796762d36b97 00:20:40.663 07:57:03 
ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:40.663 07:57:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:20:40.663 07:57:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:20:40.663 07:57:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f1b4422a-eedd-4731-af48-796762d36b97 00:20:40.921 07:57:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:40.921 { 00:20:40.921 "name": "f1b4422a-eedd-4731-af48-796762d36b97", 00:20:40.921 "aliases": [ 00:20:40.921 "lvs/nvme0n1p0" 00:20:40.921 ], 00:20:40.921 "product_name": "Logical Volume", 00:20:40.921 "block_size": 4096, 00:20:40.922 "num_blocks": 26476544, 00:20:40.922 "uuid": "f1b4422a-eedd-4731-af48-796762d36b97", 00:20:40.922 "assigned_rate_limits": { 00:20:40.922 "rw_ios_per_sec": 0, 00:20:40.922 "rw_mbytes_per_sec": 0, 00:20:40.922 "r_mbytes_per_sec": 0, 00:20:40.922 "w_mbytes_per_sec": 0 00:20:40.922 }, 00:20:40.922 "claimed": false, 00:20:40.922 "zoned": false, 00:20:40.922 "supported_io_types": { 00:20:40.922 "read": true, 00:20:40.922 "write": true, 00:20:40.922 "unmap": true, 00:20:40.922 "flush": false, 00:20:40.922 "reset": true, 00:20:40.922 "nvme_admin": false, 00:20:40.922 "nvme_io": false, 00:20:40.922 "nvme_io_md": false, 00:20:40.922 "write_zeroes": true, 00:20:40.922 "zcopy": false, 00:20:40.922 "get_zone_info": false, 00:20:40.922 "zone_management": false, 00:20:40.922 "zone_append": false, 00:20:40.922 "compare": false, 00:20:40.922 "compare_and_write": false, 00:20:40.922 "abort": false, 00:20:40.922 "seek_hole": true, 00:20:40.922 "seek_data": true, 00:20:40.922 "copy": false, 00:20:40.922 "nvme_iov_md": false 00:20:40.922 }, 00:20:40.922 "driver_specific": { 00:20:40.922 "lvol": { 00:20:40.922 "lvol_store_uuid": "ba0ad3af-6f84-48a7-876b-31ab1bf8e75b", 00:20:40.922 "base_bdev": "nvme0n1", 00:20:40.922 "thin_provision": true, 00:20:40.922 "num_allocated_clusters": 0, 00:20:40.922 "snapshot": false, 00:20:40.922 "clone": false, 00:20:40.922 "esnap_clone": false 00:20:40.922 } 00:20:40.922 } 00:20:40.922 } 00:20:40.922 ]' 00:20:40.922 07:57:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:20:40.922 07:57:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:20:40.922 07:57:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:40.922 07:57:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:20:40.922 07:57:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:20:40.922 07:57:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:20:40.922 07:57:03 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:20:40.922 07:57:03 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:41.181 07:57:03 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:20:41.181 07:57:03 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:20:41.181 07:57:03 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:20:41.181 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:20:41.181 07:57:03 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size f1b4422a-eedd-4731-af48-796762d36b97 00:20:41.181 07:57:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local 
bdev_name=f1b4422a-eedd-4731-af48-796762d36b97 00:20:41.181 07:57:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:41.181 07:57:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:20:41.181 07:57:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:20:41.181 07:57:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f1b4422a-eedd-4731-af48-796762d36b97 00:20:41.747 07:57:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:41.747 { 00:20:41.747 "name": "f1b4422a-eedd-4731-af48-796762d36b97", 00:20:41.747 "aliases": [ 00:20:41.747 "lvs/nvme0n1p0" 00:20:41.747 ], 00:20:41.747 "product_name": "Logical Volume", 00:20:41.747 "block_size": 4096, 00:20:41.747 "num_blocks": 26476544, 00:20:41.747 "uuid": "f1b4422a-eedd-4731-af48-796762d36b97", 00:20:41.747 "assigned_rate_limits": { 00:20:41.747 "rw_ios_per_sec": 0, 00:20:41.747 "rw_mbytes_per_sec": 0, 00:20:41.747 "r_mbytes_per_sec": 0, 00:20:41.747 "w_mbytes_per_sec": 0 00:20:41.747 }, 00:20:41.747 "claimed": false, 00:20:41.747 "zoned": false, 00:20:41.747 "supported_io_types": { 00:20:41.747 "read": true, 00:20:41.747 "write": true, 00:20:41.748 "unmap": true, 00:20:41.748 "flush": false, 00:20:41.748 "reset": true, 00:20:41.748 "nvme_admin": false, 00:20:41.748 "nvme_io": false, 00:20:41.748 "nvme_io_md": false, 00:20:41.748 "write_zeroes": true, 00:20:41.748 "zcopy": false, 00:20:41.748 "get_zone_info": false, 00:20:41.748 "zone_management": false, 00:20:41.748 "zone_append": false, 00:20:41.748 "compare": false, 00:20:41.748 "compare_and_write": false, 00:20:41.748 "abort": false, 00:20:41.748 "seek_hole": true, 00:20:41.748 "seek_data": true, 00:20:41.748 "copy": false, 00:20:41.748 "nvme_iov_md": false 00:20:41.748 }, 00:20:41.748 "driver_specific": { 00:20:41.748 "lvol": { 00:20:41.748 "lvol_store_uuid": "ba0ad3af-6f84-48a7-876b-31ab1bf8e75b", 00:20:41.748 "base_bdev": "nvme0n1", 00:20:41.748 "thin_provision": true, 00:20:41.748 "num_allocated_clusters": 0, 00:20:41.748 "snapshot": false, 00:20:41.748 "clone": false, 00:20:41.748 "esnap_clone": false 00:20:41.748 } 00:20:41.748 } 00:20:41.748 } 00:20:41.748 ]' 00:20:41.748 07:57:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:20:41.748 07:57:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:20:41.748 07:57:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:41.748 07:57:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:20:41.748 07:57:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:20:41.748 07:57:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:20:41.748 07:57:04 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:20:41.748 07:57:04 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:20:41.748 07:57:04 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d f1b4422a-eedd-4731-af48-796762d36b97 -c nvc0n1p0 --l2p_dram_limit 60 00:20:42.007 [2024-11-06 07:57:04.572052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.007 [2024-11-06 07:57:04.572121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:42.007 [2024-11-06 07:57:04.572148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:20:42.007 
[2024-11-06 07:57:04.572162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.007 [2024-11-06 07:57:04.572306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.007 [2024-11-06 07:57:04.572330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:42.007 [2024-11-06 07:57:04.572352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:20:42.007 [2024-11-06 07:57:04.572365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.007 [2024-11-06 07:57:04.572417] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:42.007 [2024-11-06 07:57:04.573447] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:42.008 [2024-11-06 07:57:04.573495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.008 [2024-11-06 07:57:04.573510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:42.008 [2024-11-06 07:57:04.573526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.084 ms 00:20:42.008 [2024-11-06 07:57:04.573539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.008 [2024-11-06 07:57:04.573761] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 42e6667d-37dc-4f85-9cb6-f4ddc328e29b 00:20:42.008 [2024-11-06 07:57:04.575648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.008 [2024-11-06 07:57:04.575694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:42.008 [2024-11-06 07:57:04.575717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:42.008 [2024-11-06 07:57:04.575732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.008 [2024-11-06 07:57:04.585310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.008 [2024-11-06 07:57:04.585390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:42.008 [2024-11-06 07:57:04.585410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.464 ms 00:20:42.008 [2024-11-06 07:57:04.585433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.008 [2024-11-06 07:57:04.585604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.008 [2024-11-06 07:57:04.585634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:42.008 [2024-11-06 07:57:04.585649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:20:42.008 [2024-11-06 07:57:04.585670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.008 [2024-11-06 07:57:04.585781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.008 [2024-11-06 07:57:04.585811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:42.008 [2024-11-06 07:57:04.585826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:42.008 [2024-11-06 07:57:04.585841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.008 [2024-11-06 07:57:04.585893] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:42.008 [2024-11-06 07:57:04.591138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.008 [2024-11-06 
07:57:04.591182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:42.008 [2024-11-06 07:57:04.591202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.258 ms 00:20:42.008 [2024-11-06 07:57:04.591215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.008 [2024-11-06 07:57:04.591298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.008 [2024-11-06 07:57:04.591317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:42.008 [2024-11-06 07:57:04.591334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:42.008 [2024-11-06 07:57:04.591346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.008 [2024-11-06 07:57:04.591440] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:42.008 [2024-11-06 07:57:04.591636] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:42.008 [2024-11-06 07:57:04.591678] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:42.008 [2024-11-06 07:57:04.591696] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:42.008 [2024-11-06 07:57:04.591716] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:42.008 [2024-11-06 07:57:04.591730] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:42.008 [2024-11-06 07:57:04.591746] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:42.008 [2024-11-06 07:57:04.591758] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:42.008 [2024-11-06 07:57:04.591772] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:42.008 [2024-11-06 07:57:04.591784] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:42.008 [2024-11-06 07:57:04.591805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.008 [2024-11-06 07:57:04.591818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:42.008 [2024-11-06 07:57:04.591840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.394 ms 00:20:42.008 [2024-11-06 07:57:04.591852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.008 [2024-11-06 07:57:04.591987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.008 [2024-11-06 07:57:04.592013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:42.008 [2024-11-06 07:57:04.592030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:20:42.008 [2024-11-06 07:57:04.592043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.008 [2024-11-06 07:57:04.592196] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:42.008 [2024-11-06 07:57:04.592230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:42.008 [2024-11-06 07:57:04.592260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:42.008 [2024-11-06 07:57:04.592282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:42.008 [2024-11-06 07:57:04.592297] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:20:42.008 [2024-11-06 07:57:04.592309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:42.008 [2024-11-06 07:57:04.592323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:42.008 [2024-11-06 07:57:04.592335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:42.008 [2024-11-06 07:57:04.592350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:42.008 [2024-11-06 07:57:04.592369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:42.008 [2024-11-06 07:57:04.592384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:42.008 [2024-11-06 07:57:04.592396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:42.008 [2024-11-06 07:57:04.592410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:42.008 [2024-11-06 07:57:04.592422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:42.008 [2024-11-06 07:57:04.592436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:42.008 [2024-11-06 07:57:04.592447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:42.008 [2024-11-06 07:57:04.592466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:42.008 [2024-11-06 07:57:04.592478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:42.008 [2024-11-06 07:57:04.592492] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:42.008 [2024-11-06 07:57:04.592503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:42.008 [2024-11-06 07:57:04.592517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:42.008 [2024-11-06 07:57:04.592529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:42.008 [2024-11-06 07:57:04.592543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:42.008 [2024-11-06 07:57:04.592554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:42.008 [2024-11-06 07:57:04.592568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:42.008 [2024-11-06 07:57:04.592579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:42.008 [2024-11-06 07:57:04.592593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:42.008 [2024-11-06 07:57:04.592604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:42.008 [2024-11-06 07:57:04.592618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:42.008 [2024-11-06 07:57:04.592629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:42.008 [2024-11-06 07:57:04.592643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:42.008 [2024-11-06 07:57:04.592654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:42.008 [2024-11-06 07:57:04.592671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:42.008 [2024-11-06 07:57:04.592687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:42.008 [2024-11-06 07:57:04.592701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:42.008 [2024-11-06 07:57:04.592733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:42.008 [2024-11-06 07:57:04.592748] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:42.008 [2024-11-06 07:57:04.592760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:42.008 [2024-11-06 07:57:04.592774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:42.008 [2024-11-06 07:57:04.592786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:42.008 [2024-11-06 07:57:04.592802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:42.008 [2024-11-06 07:57:04.592814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:42.008 [2024-11-06 07:57:04.592828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:42.008 [2024-11-06 07:57:04.592839] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:42.008 [2024-11-06 07:57:04.592854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:42.008 [2024-11-06 07:57:04.592867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:42.008 [2024-11-06 07:57:04.592882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:42.008 [2024-11-06 07:57:04.592894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:42.008 [2024-11-06 07:57:04.592911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:42.008 [2024-11-06 07:57:04.592934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:42.008 [2024-11-06 07:57:04.592950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:42.008 [2024-11-06 07:57:04.592962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:42.008 [2024-11-06 07:57:04.592976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:42.008 [2024-11-06 07:57:04.592993] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:42.008 [2024-11-06 07:57:04.593012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:42.008 [2024-11-06 07:57:04.593025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:42.009 [2024-11-06 07:57:04.593040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:42.009 [2024-11-06 07:57:04.593053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:42.009 [2024-11-06 07:57:04.593067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:42.009 [2024-11-06 07:57:04.593079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:42.009 [2024-11-06 07:57:04.593094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:42.009 [2024-11-06 07:57:04.593108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:42.009 [2024-11-06 07:57:04.593123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:20:42.009 [2024-11-06 07:57:04.593135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:42.009 [2024-11-06 07:57:04.593154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:42.009 [2024-11-06 07:57:04.593171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:42.009 [2024-11-06 07:57:04.593186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:42.009 [2024-11-06 07:57:04.593198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:42.009 [2024-11-06 07:57:04.593214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:42.009 [2024-11-06 07:57:04.593226] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:42.009 [2024-11-06 07:57:04.593242] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:42.009 [2024-11-06 07:57:04.593269] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:42.009 [2024-11-06 07:57:04.593286] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:42.009 [2024-11-06 07:57:04.593299] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:42.009 [2024-11-06 07:57:04.593314] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:42.009 [2024-11-06 07:57:04.593328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.009 [2024-11-06 07:57:04.593343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:42.009 [2024-11-06 07:57:04.593359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.219 ms 00:20:42.009 [2024-11-06 07:57:04.593374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.009 [2024-11-06 07:57:04.593463] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
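The region dump above is what FTL prints while bringing up a new instance: per-region offsets and sizes on the NV cache, followed by the superblock's view of both the nvc and base devices. For reference, a minimal sketch of creating such a device with SPDK's rpc.py — the cache name nvc0n1p0 is taken from the driver_specific output later in this log, the base bdev is a placeholder, and any extra flags the test script actually passes are omitted:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # -b names the new FTL bdev (the [FTL][ftl0] tag in the messages above),
    # -d the base (bulk data) bdev, -c the NV cache bdev.
    $rpc bdev_ftl_create -b ftl0 -d <base_bdev> -c nvc0n1p0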
00:20:42.009 [2024-11-06 07:57:04.593487] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:45.340 [2024-11-06 07:57:07.780984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.340 [2024-11-06 07:57:07.781072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:45.340 [2024-11-06 07:57:07.781095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3187.537 ms 00:20:45.340 [2024-11-06 07:57:07.781127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.340 [2024-11-06 07:57:07.821217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.340 [2024-11-06 07:57:07.821327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:45.340 [2024-11-06 07:57:07.821351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.722 ms 00:20:45.340 [2024-11-06 07:57:07.821367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.340 [2024-11-06 07:57:07.821589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.340 [2024-11-06 07:57:07.821615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:45.340 [2024-11-06 07:57:07.821630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:20:45.340 [2024-11-06 07:57:07.821648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.340 [2024-11-06 07:57:07.885649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.340 [2024-11-06 07:57:07.885749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:45.340 [2024-11-06 07:57:07.885772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.878 ms 00:20:45.340 [2024-11-06 07:57:07.885795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.340 [2024-11-06 07:57:07.885887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.340 [2024-11-06 07:57:07.885907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:45.340 [2024-11-06 07:57:07.885921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:45.340 [2024-11-06 07:57:07.885936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.340 [2024-11-06 07:57:07.886622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.340 [2024-11-06 07:57:07.886656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:45.340 [2024-11-06 07:57:07.886672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.565 ms 00:20:45.340 [2024-11-06 07:57:07.886687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.340 [2024-11-06 07:57:07.886902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.340 [2024-11-06 07:57:07.886933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:45.340 [2024-11-06 07:57:07.886948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:20:45.340 [2024-11-06 07:57:07.886966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.340 [2024-11-06 07:57:07.909305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.340 [2024-11-06 07:57:07.909381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:45.340 [2024-11-06 
07:57:07.909403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.292 ms 00:20:45.340 [2024-11-06 07:57:07.909419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.340 [2024-11-06 07:57:07.925826] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:45.340 [2024-11-06 07:57:07.947417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.340 [2024-11-06 07:57:07.947534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:45.340 [2024-11-06 07:57:07.947562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.775 ms 00:20:45.340 [2024-11-06 07:57:07.947576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.600 [2024-11-06 07:57:08.008875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.600 [2024-11-06 07:57:08.008969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:45.600 [2024-11-06 07:57:08.008995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.189 ms 00:20:45.600 [2024-11-06 07:57:08.009010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.600 [2024-11-06 07:57:08.009364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.600 [2024-11-06 07:57:08.009400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:45.600 [2024-11-06 07:57:08.009432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:20:45.600 [2024-11-06 07:57:08.009445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.600 [2024-11-06 07:57:08.042598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.600 [2024-11-06 07:57:08.042675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:45.600 [2024-11-06 07:57:08.042701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.030 ms 00:20:45.600 [2024-11-06 07:57:08.042718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.600 [2024-11-06 07:57:08.074881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.600 [2024-11-06 07:57:08.074958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:45.600 [2024-11-06 07:57:08.074985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.043 ms 00:20:45.600 [2024-11-06 07:57:08.074999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.600 [2024-11-06 07:57:08.075974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.600 [2024-11-06 07:57:08.076031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:45.600 [2024-11-06 07:57:08.076051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.865 ms 00:20:45.600 [2024-11-06 07:57:08.076063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.600 [2024-11-06 07:57:08.167500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.600 [2024-11-06 07:57:08.167582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:45.600 [2024-11-06 07:57:08.167612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 91.307 ms 00:20:45.600 [2024-11-06 07:57:08.167626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.600 [2024-11-06 
07:57:08.202493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.600 [2024-11-06 07:57:08.202578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:45.600 [2024-11-06 07:57:08.202605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.596 ms 00:20:45.600 [2024-11-06 07:57:08.202619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.858 [2024-11-06 07:57:08.236845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.858 [2024-11-06 07:57:08.236937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:45.858 [2024-11-06 07:57:08.236975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.087 ms 00:20:45.858 [2024-11-06 07:57:08.236989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.858 [2024-11-06 07:57:08.270888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.858 [2024-11-06 07:57:08.270960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:45.858 [2024-11-06 07:57:08.270986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.768 ms 00:20:45.858 [2024-11-06 07:57:08.270999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.858 [2024-11-06 07:57:08.271130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.858 [2024-11-06 07:57:08.271150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:45.858 [2024-11-06 07:57:08.271171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:45.858 [2024-11-06 07:57:08.271184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.858 [2024-11-06 07:57:08.271465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.858 [2024-11-06 07:57:08.271502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:45.858 [2024-11-06 07:57:08.271521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:20:45.858 [2024-11-06 07:57:08.271534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.858 [2024-11-06 07:57:08.273145] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3700.467 ms, result 0 00:20:45.858 { 00:20:45.858 "name": "ftl0", 00:20:45.858 "uuid": "42e6667d-37dc-4f85-9cb6-f4ddc328e29b" 00:20:45.858 } 00:20:45.858 07:57:08 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:20:45.858 07:57:08 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:20:45.858 07:57:08 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:45.858 07:57:08 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local i 00:20:45.858 07:57:08 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:45.858 07:57:08 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:45.858 07:57:08 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:20:46.117 07:57:08 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:20:46.375 [ 00:20:46.375 { 00:20:46.375 "name": "ftl0", 00:20:46.375 "aliases": [ 00:20:46.375 "42e6667d-37dc-4f85-9cb6-f4ddc328e29b" 00:20:46.375 ], 00:20:46.375 "product_name": "FTL 
disk", 00:20:46.375 "block_size": 4096, 00:20:46.375 "num_blocks": 20971520, 00:20:46.375 "uuid": "42e6667d-37dc-4f85-9cb6-f4ddc328e29b", 00:20:46.375 "assigned_rate_limits": { 00:20:46.375 "rw_ios_per_sec": 0, 00:20:46.375 "rw_mbytes_per_sec": 0, 00:20:46.375 "r_mbytes_per_sec": 0, 00:20:46.375 "w_mbytes_per_sec": 0 00:20:46.375 }, 00:20:46.375 "claimed": false, 00:20:46.375 "zoned": false, 00:20:46.375 "supported_io_types": { 00:20:46.375 "read": true, 00:20:46.375 "write": true, 00:20:46.375 "unmap": true, 00:20:46.375 "flush": true, 00:20:46.375 "reset": false, 00:20:46.375 "nvme_admin": false, 00:20:46.375 "nvme_io": false, 00:20:46.375 "nvme_io_md": false, 00:20:46.375 "write_zeroes": true, 00:20:46.375 "zcopy": false, 00:20:46.375 "get_zone_info": false, 00:20:46.375 "zone_management": false, 00:20:46.375 "zone_append": false, 00:20:46.375 "compare": false, 00:20:46.375 "compare_and_write": false, 00:20:46.375 "abort": false, 00:20:46.375 "seek_hole": false, 00:20:46.375 "seek_data": false, 00:20:46.375 "copy": false, 00:20:46.375 "nvme_iov_md": false 00:20:46.375 }, 00:20:46.375 "driver_specific": { 00:20:46.375 "ftl": { 00:20:46.375 "base_bdev": "f1b4422a-eedd-4731-af48-796762d36b97", 00:20:46.375 "cache": "nvc0n1p0" 00:20:46.375 } 00:20:46.375 } 00:20:46.375 } 00:20:46.375 ] 00:20:46.375 07:57:08 ftl.ftl_fio_basic -- common/autotest_common.sh@907 -- # return 0 00:20:46.375 07:57:08 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:20:46.375 07:57:08 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:46.634 07:57:09 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:20:46.634 07:57:09 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:46.893 [2024-11-06 07:57:09.467625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.893 [2024-11-06 07:57:09.467959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:46.893 [2024-11-06 07:57:09.467994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:46.893 [2024-11-06 07:57:09.468012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.893 [2024-11-06 07:57:09.468087] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:46.893 [2024-11-06 07:57:09.471896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.893 [2024-11-06 07:57:09.471934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:46.893 [2024-11-06 07:57:09.471954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.778 ms 00:20:46.893 [2024-11-06 07:57:09.471967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.893 [2024-11-06 07:57:09.472761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.893 [2024-11-06 07:57:09.472796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:46.893 [2024-11-06 07:57:09.472816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.740 ms 00:20:46.893 [2024-11-06 07:57:09.472829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.893 [2024-11-06 07:57:09.476031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.893 [2024-11-06 07:57:09.476061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:46.893 
[2024-11-06 07:57:09.476084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.161 ms 00:20:46.893 [2024-11-06 07:57:09.476097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.893 [2024-11-06 07:57:09.482756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.893 [2024-11-06 07:57:09.482998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:46.893 [2024-11-06 07:57:09.483036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.580 ms 00:20:46.893 [2024-11-06 07:57:09.483051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.893 [2024-11-06 07:57:09.516595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.893 [2024-11-06 07:57:09.516697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:46.893 [2024-11-06 07:57:09.516734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.373 ms 00:20:46.893 [2024-11-06 07:57:09.516747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.154 [2024-11-06 07:57:09.537608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.154 [2024-11-06 07:57:09.537716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:47.154 [2024-11-06 07:57:09.537742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.707 ms 00:20:47.154 [2024-11-06 07:57:09.537755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.154 [2024-11-06 07:57:09.538096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.154 [2024-11-06 07:57:09.538119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:47.154 [2024-11-06 07:57:09.538137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms 00:20:47.154 [2024-11-06 07:57:09.538150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.154 [2024-11-06 07:57:09.572351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.154 [2024-11-06 07:57:09.572698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:47.154 [2024-11-06 07:57:09.572740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.145 ms 00:20:47.154 [2024-11-06 07:57:09.572754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.154 [2024-11-06 07:57:09.606686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.154 [2024-11-06 07:57:09.606772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:47.154 [2024-11-06 07:57:09.606798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.828 ms 00:20:47.154 [2024-11-06 07:57:09.606810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.154 [2024-11-06 07:57:09.640013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.154 [2024-11-06 07:57:09.640082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:47.154 [2024-11-06 07:57:09.640107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.096 ms 00:20:47.155 [2024-11-06 07:57:09.640120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.155 [2024-11-06 07:57:09.673097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.155 [2024-11-06 07:57:09.673181] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:47.155 [2024-11-06 07:57:09.673206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.703 ms 00:20:47.155 [2024-11-06 07:57:09.673220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.155 [2024-11-06 07:57:09.673349] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:47.155 [2024-11-06 07:57:09.673377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.673397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.673410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.673426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.673439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.673455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.673480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.673499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.673512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.673527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.673540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.673556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.673569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.673584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.673597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.673612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.673625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.673639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.673652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.673670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.673683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.673702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 
[2024-11-06 07:57:09.673715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.673733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.673745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.673761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.673773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.673789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.673802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.673818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.673830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.673856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.673870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.673885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.673898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.673913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.673925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.673940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.673953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.673971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.673984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.673999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.674012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.674027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.674040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.674055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.674068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:20:47.155 [2024-11-06 07:57:09.674084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.674097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.674112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.674124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.674139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.674158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.674174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.674186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.674204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.674216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.674233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.674246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.674276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.674289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.674304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.674317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.674350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.674364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.674379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.674392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.674407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.674420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.674435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.674447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.674476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.674489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.674506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.674519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.674534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.674547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.674562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.674575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.674590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.674603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.674618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.674631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.674671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:47.155 [2024-11-06 07:57:09.674685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:47.156 [2024-11-06 07:57:09.674701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:47.156 [2024-11-06 07:57:09.674713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:47.156 [2024-11-06 07:57:09.674731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:47.156 [2024-11-06 07:57:09.674744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:47.156 [2024-11-06 07:57:09.674760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:47.156 [2024-11-06 07:57:09.674773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:47.156 [2024-11-06 07:57:09.674795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:47.156 [2024-11-06 07:57:09.674808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:47.156 [2024-11-06 07:57:09.674823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:47.156 [2024-11-06 07:57:09.674837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:47.156 [2024-11-06 07:57:09.674857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:47.156 [2024-11-06 07:57:09.674871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:47.156 [2024-11-06 07:57:09.674886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:47.156 [2024-11-06 07:57:09.674899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:47.156 [2024-11-06 07:57:09.674916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:47.156 [2024-11-06 07:57:09.674947] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:47.156 [2024-11-06 07:57:09.674963] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 42e6667d-37dc-4f85-9cb6-f4ddc328e29b 00:20:47.156 [2024-11-06 07:57:09.674976] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:47.156 [2024-11-06 07:57:09.674993] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:47.156 [2024-11-06 07:57:09.675005] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:47.156 [2024-11-06 07:57:09.675020] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:47.156 [2024-11-06 07:57:09.675032] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:47.156 [2024-11-06 07:57:09.675052] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:47.156 [2024-11-06 07:57:09.675064] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:47.156 [2024-11-06 07:57:09.675077] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:47.156 [2024-11-06 07:57:09.675089] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:47.156 [2024-11-06 07:57:09.675104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.156 [2024-11-06 07:57:09.675117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:47.156 [2024-11-06 07:57:09.675133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.759 ms 00:20:47.156 [2024-11-06 07:57:09.675145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.156 [2024-11-06 07:57:09.693555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.156 [2024-11-06 07:57:09.693788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:47.156 [2024-11-06 07:57:09.693913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.230 ms 00:20:47.156 [2024-11-06 07:57:09.694009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.156 [2024-11-06 07:57:09.694569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.156 [2024-11-06 07:57:09.694589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:47.156 [2024-11-06 07:57:09.694607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.472 ms 00:20:47.156 [2024-11-06 07:57:09.694620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.156 [2024-11-06 07:57:09.755357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.156 [2024-11-06 07:57:09.755829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:47.156 [2024-11-06 07:57:09.755952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.156 [2024-11-06 07:57:09.756036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
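The "WAF: inf" in the statistics dump above follows directly from the counters printed with it: write amplification factor is the ratio of total media writes to user writes, here WAF = total writes / user writes = 960 / 0, reported as inf because this freshly created device has only ever written its own metadata (total valid LBAs and user writes are both 0).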
00:20:47.156 [2024-11-06 07:57:09.756222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.156 [2024-11-06 07:57:09.756352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:47.156 [2024-11-06 07:57:09.756443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.156 [2024-11-06 07:57:09.756518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.156 [2024-11-06 07:57:09.756766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.156 [2024-11-06 07:57:09.756858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:47.156 [2024-11-06 07:57:09.756964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.156 [2024-11-06 07:57:09.757044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.156 [2024-11-06 07:57:09.757156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.156 [2024-11-06 07:57:09.757232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:47.156 [2024-11-06 07:57:09.757351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.156 [2024-11-06 07:57:09.757427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.415 [2024-11-06 07:57:09.877133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.415 [2024-11-06 07:57:09.877416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:47.415 [2024-11-06 07:57:09.877538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.415 [2024-11-06 07:57:09.877618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.415 [2024-11-06 07:57:09.969230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.415 [2024-11-06 07:57:09.969512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:47.415 [2024-11-06 07:57:09.969618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.415 [2024-11-06 07:57:09.969696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.415 [2024-11-06 07:57:09.969927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.415 [2024-11-06 07:57:09.970017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:47.415 [2024-11-06 07:57:09.970111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.415 [2024-11-06 07:57:09.970185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.415 [2024-11-06 07:57:09.970409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.415 [2024-11-06 07:57:09.970499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:47.415 [2024-11-06 07:57:09.970587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.415 [2024-11-06 07:57:09.970661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.415 [2024-11-06 07:57:09.970912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.415 [2024-11-06 07:57:09.971005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:47.415 [2024-11-06 07:57:09.971102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.415 [2024-11-06 
07:57:09.971176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.415 [2024-11-06 07:57:09.971350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.415 [2024-11-06 07:57:09.971449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:47.415 [2024-11-06 07:57:09.971541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.415 [2024-11-06 07:57:09.971618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.415 [2024-11-06 07:57:09.971754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.415 [2024-11-06 07:57:09.971835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:47.415 [2024-11-06 07:57:09.971926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.415 [2024-11-06 07:57:09.972008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.415 [2024-11-06 07:57:09.972167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:47.415 [2024-11-06 07:57:09.972270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:47.415 [2024-11-06 07:57:09.972372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:47.415 [2024-11-06 07:57:09.972450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.415 [2024-11-06 07:57:09.972749] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 505.101 ms, result 0 00:20:47.415 true 00:20:47.415 07:57:10 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 74473 00:20:47.415 07:57:10 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # '[' -z 74473 ']' 00:20:47.415 07:57:10 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # kill -0 74473 00:20:47.415 07:57:10 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # uname 00:20:47.415 07:57:10 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:47.415 07:57:10 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74473 00:20:47.415 07:57:10 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:47.415 07:57:10 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:47.415 killing process with pid 74473 00:20:47.415 07:57:10 ftl.ftl_fio_basic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74473' 00:20:47.415 07:57:10 ftl.ftl_fio_basic -- common/autotest_common.sh@969 -- # kill 74473 00:20:47.415 07:57:10 ftl.ftl_fio_basic -- common/autotest_common.sh@974 -- # wait 74473 00:20:52.687 07:57:14 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:20:52.687 07:57:14 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:20:52.687 07:57:14 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:20:52.687 07:57:14 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:52.687 07:57:14 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:52.687 07:57:14 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:20:52.687 07:57:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:20:52.687 07:57:14 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:52.687 07:57:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:52.687 07:57:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:52.687 07:57:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:52.687 07:57:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:20:52.687 07:57:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:52.687 07:57:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:52.687 07:57:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:52.687 07:57:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:20:52.687 07:57:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:52.687 07:57:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:52.687 07:57:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:52.687 07:57:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:20:52.687 07:57:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:52.687 07:57:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:20:52.687 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:20:52.687 fio-3.35 00:20:52.687 Starting 1 thread 00:20:59.243 00:20:59.243 test: (groupid=0, jobs=1): err= 0: pid=74702: Wed Nov 6 07:57:20 2024 00:20:59.243 read: IOPS=848, BW=56.3MiB/s (59.0MB/s)(255MiB/4520msec) 00:20:59.243 slat (usec): min=9, max=141, avg=13.22, stdev= 5.24 00:20:59.243 clat (usec): min=380, max=2715, avg=524.91, stdev=66.96 00:20:59.243 lat (usec): min=403, max=2738, avg=538.13, stdev=67.93 00:20:59.243 clat percentiles (usec): 00:20:59.243 | 1.00th=[ 412], 5.00th=[ 433], 10.00th=[ 445], 20.00th=[ 482], 00:20:59.243 | 30.00th=[ 502], 40.00th=[ 515], 50.00th=[ 523], 60.00th=[ 529], 00:20:59.243 | 70.00th=[ 545], 80.00th=[ 562], 90.00th=[ 603], 95.00th=[ 627], 00:20:59.243 | 99.00th=[ 676], 99.50th=[ 701], 99.90th=[ 766], 99.95th=[ 807], 00:20:59.243 | 99.99th=[ 2704] 00:20:59.243 write: IOPS=854, BW=56.7MiB/s (59.5MB/s)(256MiB/4515msec); 0 zone resets 00:20:59.243 slat (usec): min=23, max=134, avg=30.20, stdev= 7.61 00:20:59.243 clat (usec): min=419, max=1212, avg=592.03, stdev=69.89 00:20:59.243 lat (usec): min=445, max=1250, avg=622.23, stdev=70.49 00:20:59.243 clat percentiles (usec): 00:20:59.243 | 1.00th=[ 461], 5.00th=[ 502], 10.00th=[ 523], 20.00th=[ 545], 00:20:59.243 | 30.00th=[ 553], 40.00th=[ 570], 50.00th=[ 578], 60.00th=[ 603], 00:20:59.243 | 70.00th=[ 619], 80.00th=[ 635], 90.00th=[ 660], 95.00th=[ 685], 00:20:59.243 | 99.00th=[ 873], 99.50th=[ 947], 99.90th=[ 1074], 99.95th=[ 1205], 00:20:59.243 | 99.99th=[ 1205] 00:20:59.243 bw ( KiB/s): min=56440, max=61472, per=100.00%, avg=58132.44, stdev=1506.95, samples=9 00:20:59.243 iops : min= 830, max= 904, avg=854.89, stdev=22.16, samples=9 00:20:59.243 lat (usec) : 500=16.40%, 750=82.52%, 1000=0.98% 00:20:59.243 lat (msec) : 
2=0.09%, 4=0.01% 00:20:59.243 cpu : usr=98.89%, sys=0.15%, ctx=6, majf=0, minf=1169 00:20:59.243 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:59.243 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.243 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.243 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.244 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:59.244 00:20:59.244 Run status group 0 (all jobs): 00:20:59.244 READ: bw=56.3MiB/s (59.0MB/s), 56.3MiB/s-56.3MiB/s (59.0MB/s-59.0MB/s), io=255MiB (267MB), run=4520-4520msec 00:20:59.244 WRITE: bw=56.7MiB/s (59.5MB/s), 56.7MiB/s-56.7MiB/s (59.5MB/s-59.5MB/s), io=256MiB (269MB), run=4515-4515msec 00:21:00.618 ----------------------------------------------------- 00:21:00.618 Suppressions used: 00:21:00.618 count bytes template 00:21:00.618 1 5 /usr/src/fio/parse.c 00:21:00.618 1 8 libtcmalloc_minimal.so 00:21:00.618 1 904 libcrypto.so 00:21:00.618 ----------------------------------------------------- 00:21:00.618 00:21:00.618 07:57:22 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:21:00.618 07:57:22 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:00.618 07:57:22 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:00.618 07:57:22 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:21:00.618 07:57:22 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:21:00.618 07:57:22 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:00.618 07:57:22 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:00.618 07:57:22 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:21:00.618 07:57:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:21:00.618 07:57:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:00.618 07:57:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:00.618 07:57:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:00.618 07:57:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:00.618 07:57:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:21:00.618 07:57:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:00.618 07:57:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:00.618 07:57:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:21:00.618 07:57:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:00.618 07:57:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:00.618 07:57:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:00.618 07:57:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:00.618 07:57:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:21:00.618 07:57:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:00.618 07:57:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:21:00.877 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:21:00.877 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:21:00.877 fio-3.35 00:21:00.877 Starting 2 threads 00:21:39.626 00:21:39.626 first_half: (groupid=0, jobs=1): err= 0: pid=74823: Wed Nov 6 07:57:57 2024 00:21:39.626 read: IOPS=2001, BW=8007KiB/s (8200kB/s)(255MiB/32568msec) 00:21:39.626 slat (nsec): min=5198, max=79318, avg=9419.43, stdev=3209.22 00:21:39.626 clat (usec): min=881, max=391023, avg=46347.12, stdev=22658.95 00:21:39.626 lat (usec): min=893, max=391034, avg=46356.54, stdev=22659.20 00:21:39.626 clat percentiles (msec): 00:21:39.626 | 1.00th=[ 8], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 41], 00:21:39.626 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 43], 60.00th=[ 44], 00:21:39.626 | 70.00th=[ 45], 80.00th=[ 47], 90.00th=[ 50], 95.00th=[ 55], 00:21:39.626 | 99.00th=[ 180], 99.50th=[ 205], 99.90th=[ 305], 99.95th=[ 338], 00:21:39.626 | 99.99th=[ 380] 00:21:39.626 write: IOPS=2954, BW=11.5MiB/s (12.1MB/s)(256MiB/22181msec); 0 zone resets 00:21:39.626 slat (usec): min=6, max=623, avg=12.01, stdev= 7.34 00:21:39.626 clat (usec): min=511, max=205564, avg=17469.66, stdev=31826.28 00:21:39.626 lat (usec): min=524, max=205575, avg=17481.67, stdev=31826.56 00:21:39.626 clat percentiles (usec): 00:21:39.626 | 1.00th=[ 1139], 5.00th=[ 1434], 10.00th=[ 1631], 20.00th=[ 1975], 00:21:39.626 | 30.00th=[ 2311], 40.00th=[ 2966], 50.00th=[ 5080], 60.00th=[ 6980], 00:21:39.626 | 70.00th=[ 9110], 80.00th=[ 17695], 90.00th=[ 87557], 95.00th=[103285], 00:21:39.626 | 99.00th=[125305], 99.50th=[132645], 99.90th=[198181], 99.95th=[200279], 00:21:39.626 | 99.99th=[204473] 00:21:39.626 bw ( KiB/s): min= 3552, max=43392, per=100.00%, avg=20164.92, stdev=9879.67, samples=26 00:21:39.626 iops : min= 888, max=10848, avg=5041.23, stdev=2469.92, samples=26 00:21:39.626 lat (usec) : 750=0.02%, 1000=0.14% 00:21:39.626 lat (msec) : 2=10.34%, 4=12.58%, 10=13.48%, 20=6.82%, 50=46.27% 00:21:39.626 lat (msec) : 100=6.28%, 250=3.97%, 500=0.10% 00:21:39.626 cpu : usr=99.08%, sys=0.19%, ctx=51, majf=0, minf=5555 00:21:39.626 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:21:39.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:39.626 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:39.626 issued rwts: total=65196,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:39.626 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:39.626 second_half: (groupid=0, jobs=1): err= 0: pid=74824: Wed Nov 6 07:57:57 2024 00:21:39.626 read: IOPS=1987, BW=7949KiB/s (8140kB/s)(255MiB/32815msec) 00:21:39.626 slat (nsec): min=5336, max=65978, avg=10278.58, stdev=3162.04 00:21:39.626 clat (usec): min=1086, max=395756, avg=45255.33, stdev=21034.49 00:21:39.626 lat (usec): min=1100, max=395769, avg=45265.61, stdev=21034.52 00:21:39.626 clat percentiles (msec): 00:21:39.626 | 1.00th=[ 11], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 41], 00:21:39.626 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 43], 60.00th=[ 44], 00:21:39.626 | 70.00th=[ 45], 80.00th=[ 46], 90.00th=[ 50], 95.00th=[ 53], 00:21:39.626 
| 99.00th=[ 167], 99.50th=[ 199], 99.90th=[ 262], 99.95th=[ 284], 00:21:39.626 | 99.99th=[ 388] 00:21:39.626 write: IOPS=2459, BW=9837KiB/s (10.1MB/s)(256MiB/26649msec); 0 zone resets 00:21:39.626 slat (usec): min=6, max=348, avg=11.86, stdev= 5.83 00:21:39.626 clat (usec): min=522, max=206509, avg=19025.24, stdev=32498.35 00:21:39.626 lat (usec): min=532, max=206519, avg=19037.10, stdev=32498.79 00:21:39.626 clat percentiles (usec): 00:21:39.626 | 1.00th=[ 1045], 5.00th=[ 1336], 10.00th=[ 1549], 20.00th=[ 1893], 00:21:39.626 | 30.00th=[ 2278], 40.00th=[ 3818], 50.00th=[ 5866], 60.00th=[ 7701], 00:21:39.626 | 70.00th=[ 13042], 80.00th=[ 19268], 90.00th=[ 88605], 95.00th=[104334], 00:21:39.626 | 99.00th=[129500], 99.50th=[135267], 99.90th=[200279], 99.95th=[202376], 00:21:39.626 | 99.99th=[204473] 00:21:39.626 bw ( KiB/s): min= 904, max=41096, per=88.83%, avg=17476.27, stdev=10862.16, samples=30 00:21:39.626 iops : min= 226, max=10274, avg=4369.07, stdev=2715.54, samples=30 00:21:39.626 lat (usec) : 750=0.02%, 1000=0.32% 00:21:39.626 lat (msec) : 2=11.42%, 4=9.04%, 10=13.42%, 20=8.80%, 50=46.72% 00:21:39.626 lat (msec) : 100=6.07%, 250=4.12%, 500=0.06% 00:21:39.626 cpu : usr=99.11%, sys=0.20%, ctx=44, majf=0, minf=5546 00:21:39.626 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:21:39.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:39.626 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:39.626 issued rwts: total=65212,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:39.626 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:39.626 00:21:39.626 Run status group 0 (all jobs): 00:21:39.626 READ: bw=15.5MiB/s (16.3MB/s), 7949KiB/s-8007KiB/s (8140kB/s-8200kB/s), io=509MiB (534MB), run=32568-32815msec 00:21:39.626 WRITE: bw=19.2MiB/s (20.1MB/s), 9837KiB/s-11.5MiB/s (10.1MB/s-12.1MB/s), io=512MiB (537MB), run=22181-26649msec 00:21:39.626 ----------------------------------------------------- 00:21:39.626 Suppressions used: 00:21:39.626 count bytes template 00:21:39.626 2 10 /usr/src/fio/parse.c 00:21:39.626 2 192 /usr/src/fio/iolog.c 00:21:39.626 1 8 libtcmalloc_minimal.so 00:21:39.626 1 904 libcrypto.so 00:21:39.626 ----------------------------------------------------- 00:21:39.626 00:21:39.626 07:58:00 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:21:39.626 07:58:00 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:39.626 07:58:00 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:39.626 07:58:00 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:21:39.626 07:58:00 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:21:39.626 07:58:00 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:39.626 07:58:00 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:39.627 07:58:00 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:21:39.627 07:58:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:21:39.627 07:58:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:39.627 07:58:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:39.627 07:58:00 
ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:39.627 07:58:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:39.627 07:58:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:21:39.627 07:58:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:39.627 07:58:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:39.627 07:58:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:21:39.627 07:58:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:39.627 07:58:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:39.627 07:58:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:39.627 07:58:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:39.627 07:58:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:21:39.627 07:58:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:39.627 07:58:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:21:39.627 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:21:39.627 fio-3.35 00:21:39.627 Starting 1 thread 00:21:57.709 00:21:57.709 test: (groupid=0, jobs=1): err= 0: pid=75222: Wed Nov 6 07:58:18 2024 00:21:57.709 read: IOPS=5866, BW=22.9MiB/s (24.0MB/s)(255MiB/11114msec) 00:21:57.709 slat (usec): min=5, max=575, avg= 9.28, stdev= 5.00 00:21:57.709 clat (usec): min=932, max=42272, avg=21804.25, stdev=1366.26 00:21:57.709 lat (usec): min=938, max=42283, avg=21813.53, stdev=1366.21 00:21:57.709 clat percentiles (usec): 00:21:57.709 | 1.00th=[20317], 5.00th=[21103], 10.00th=[21103], 20.00th=[21365], 00:21:57.709 | 30.00th=[21365], 40.00th=[21627], 50.00th=[21627], 60.00th=[21627], 00:21:57.709 | 70.00th=[21890], 80.00th=[22152], 90.00th=[22414], 95.00th=[22676], 00:21:57.709 | 99.00th=[26870], 99.50th=[32375], 99.90th=[34341], 99.95th=[37487], 00:21:57.709 | 99.99th=[41681] 00:21:57.709 write: IOPS=11.4k, BW=44.4MiB/s (46.6MB/s)(256MiB/5765msec); 0 zone resets 00:21:57.709 slat (usec): min=6, max=1469, avg=10.43, stdev=10.15 00:21:57.709 clat (usec): min=685, max=68535, avg=11198.95, stdev=13811.63 00:21:57.709 lat (usec): min=694, max=68545, avg=11209.38, stdev=13811.67 00:21:57.709 clat percentiles (usec): 00:21:57.709 | 1.00th=[ 947], 5.00th=[ 1139], 10.00th=[ 1270], 20.00th=[ 1450], 00:21:57.709 | 30.00th=[ 1631], 40.00th=[ 2057], 50.00th=[ 7308], 60.00th=[ 8586], 00:21:57.709 | 70.00th=[10159], 80.00th=[12911], 90.00th=[40633], 95.00th=[43254], 00:21:57.709 | 99.00th=[46400], 99.50th=[46924], 99.90th=[51643], 99.95th=[56361], 00:21:57.709 | 99.99th=[66323] 00:21:57.709 bw ( KiB/s): min=19376, max=68128, per=96.07%, avg=43683.67, stdev=11506.73, samples=12 00:21:57.709 iops : min= 4844, max=17032, avg=10920.92, stdev=2876.68, samples=12 00:21:57.709 lat (usec) : 750=0.01%, 1000=0.84% 00:21:57.709 lat (msec) : 2=18.99%, 4=1.15%, 10=13.63%, 20=7.55%, 50=57.75% 00:21:57.709 lat (msec) : 100=0.07% 00:21:57.709 cpu : usr=98.28%, sys=0.47%, ctx=57, majf=0, minf=5565 
00:21:57.709 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:57.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:57.709 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:57.709 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:57.709 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:57.709 00:21:57.709 Run status group 0 (all jobs): 00:21:57.709 READ: bw=22.9MiB/s (24.0MB/s), 22.9MiB/s-22.9MiB/s (24.0MB/s-24.0MB/s), io=255MiB (267MB), run=11114-11114msec 00:21:57.709 WRITE: bw=44.4MiB/s (46.6MB/s), 44.4MiB/s-44.4MiB/s (46.6MB/s-46.6MB/s), io=256MiB (268MB), run=5765-5765msec 00:21:58.276 ----------------------------------------------------- 00:21:58.276 Suppressions used: 00:21:58.276 count bytes template 00:21:58.276 1 5 /usr/src/fio/parse.c 00:21:58.276 2 192 /usr/src/fio/iolog.c 00:21:58.276 1 8 libtcmalloc_minimal.so 00:21:58.276 1 904 libcrypto.so 00:21:58.276 ----------------------------------------------------- 00:21:58.276 00:21:58.276 07:58:20 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:21:58.276 07:58:20 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:58.276 07:58:20 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:58.276 07:58:20 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:58.276 07:58:20 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:21:58.276 Remove shared memory files 00:21:58.276 07:58:20 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:21:58.276 07:58:20 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:21:58.276 07:58:20 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:21:58.276 07:58:20 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid58040 /dev/shm/spdk_tgt_trace.pid73381 00:21:58.276 07:58:20 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:21:58.276 07:58:20 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:21:58.276 00:21:58.276 real 1m21.877s 00:21:58.276 user 3m4.087s 00:21:58.276 sys 0m4.700s 00:21:58.276 07:58:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:58.277 07:58:20 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:58.277 ************************************ 00:21:58.277 END TEST ftl_fio_basic 00:21:58.277 ************************************ 00:21:58.277 07:58:20 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:21:58.277 07:58:20 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:21:58.277 07:58:20 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:58.277 07:58:20 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:58.277 ************************************ 00:21:58.277 START TEST ftl_bdevperf 00:21:58.277 ************************************ 00:21:58.277 07:58:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:21:58.277 * Looking for test storage... 
00:21:58.535 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:58.535 07:58:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:21:58.535 07:58:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1689 -- # lcov --version 00:21:58.535 07:58:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:21:58.535 07:58:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:21:58.535 07:58:20 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:58.535 07:58:20 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:58.535 07:58:20 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:58.535 07:58:20 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:21:58.535 07:58:20 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:21:58.535 07:58:20 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:21:58.535 07:58:20 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:21:58.535 07:58:20 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:21:58.535 07:58:20 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:21:58.535 07:58:20 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:21:58.535 07:58:20 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:58.535 07:58:20 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:21:58.535 07:58:20 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:21:58.535 07:58:20 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:58.535 07:58:20 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:58.535 07:58:20 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:21:58.536 07:58:20 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:21:58.536 07:58:20 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:58.536 07:58:20 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:21:58.536 07:58:20 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:58.536 07:58:20 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:21:58.536 07:58:20 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:21:58.536 07:58:20 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:58.536 07:58:20 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:21:58.536 07:58:20 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:58.536 07:58:20 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:58.536 07:58:20 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:58.536 07:58:21 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:21:58.536 07:58:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:58.536 07:58:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:21:58.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.536 --rc genhtml_branch_coverage=1 00:21:58.536 --rc genhtml_function_coverage=1 00:21:58.536 --rc genhtml_legend=1 00:21:58.536 --rc geninfo_all_blocks=1 00:21:58.536 --rc geninfo_unexecuted_blocks=1 00:21:58.536 00:21:58.536 ' 00:21:58.536 07:58:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:21:58.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.536 --rc genhtml_branch_coverage=1 00:21:58.536 
--rc genhtml_function_coverage=1 00:21:58.536 --rc genhtml_legend=1 00:21:58.536 --rc geninfo_all_blocks=1 00:21:58.536 --rc geninfo_unexecuted_blocks=1 00:21:58.536 00:21:58.536 ' 00:21:58.536 07:58:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:21:58.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.536 --rc genhtml_branch_coverage=1 00:21:58.536 --rc genhtml_function_coverage=1 00:21:58.536 --rc genhtml_legend=1 00:21:58.536 --rc geninfo_all_blocks=1 00:21:58.536 --rc geninfo_unexecuted_blocks=1 00:21:58.536 00:21:58.536 ' 00:21:58.536 07:58:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:21:58.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.536 --rc genhtml_branch_coverage=1 00:21:58.536 --rc genhtml_function_coverage=1 00:21:58.536 --rc genhtml_legend=1 00:21:58.536 --rc geninfo_all_blocks=1 00:21:58.536 --rc geninfo_unexecuted_blocks=1 00:21:58.536 00:21:58.536 ' 00:21:58.536 07:58:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:58.536 07:58:21 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:21:58.536 07:58:21 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:58.536 07:58:21 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:58.536 07:58:21 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:21:58.536 07:58:21 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:58.536 07:58:21 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:58.536 07:58:21 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:58.536 07:58:21 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:58.536 07:58:21 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:58.536 07:58:21 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:58.536 07:58:21 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:58.536 07:58:21 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:58.536 07:58:21 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:58.536 07:58:21 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:58.536 07:58:21 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:58.536 07:58:21 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:58.536 07:58:21 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:58.536 07:58:21 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:58.536 07:58:21 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:58.536 07:58:21 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:58.536 07:58:21 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:58.536 07:58:21 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:58.536 07:58:21 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:58.536 07:58:21 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:58.536 07:58:21 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:58.536 07:58:21 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:58.536 07:58:21 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:58.536 07:58:21 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:58.536 07:58:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:21:58.536 07:58:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:21:58.536 07:58:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:21:58.536 07:58:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:58.536 07:58:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:21:58.536 07:58:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=75493 00:21:58.536 07:58:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:21:58.536 07:58:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:21:58.536 07:58:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 75493 00:21:58.536 07:58:21 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 75493 ']' 00:21:58.536 07:58:21 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:58.536 07:58:21 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:58.536 07:58:21 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:58.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:58.536 07:58:21 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:58.536 07:58:21 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:58.536 [2024-11-06 07:58:21.131410] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
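The trace above launches bdevperf suspended (-z) against the not-yet-created ftl0 bdev, then blocks in waitforlisten until the RPC socket answers. A minimal sketch of that launch-and-wait pattern, assuming the default /var/tmp/spdk.sock socket and this workspace's paths; the polling loop is a stand-in for the autotest waitforlisten helper, not its actual implementation:

    #!/usr/bin/env bash
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    RPC_SOCK=/var/tmp/spdk.sock

    # -z: start suspended; wait for a perform_tests RPC before doing I/O.
    # -T ftl0: once running, restrict the workload to the ftl0 bdev.
    "$SPDK_DIR/build/examples/bdevperf" -z -T ftl0 &
    bdevperf_pid=$!

    # Poll until the RPC server accepts a trivial request (rpc_get_methods).
    until "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$bdevperf_pid" 2>/dev/null || { echo "bdevperf exited early" >&2; exit 1; }
        sleep 0.5
    done
    echo "bdevperf (pid $bdevperf_pid) is ready on $RPC_SOCK"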
00:21:58.536 [2024-11-06 07:58:21.131587] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75493 ] 00:21:58.795 [2024-11-06 07:58:21.324480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.054 [2024-11-06 07:58:21.481340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:59.621 07:58:22 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:59.621 07:58:22 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:21:59.621 07:58:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:59.621 07:58:22 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:21:59.621 07:58:22 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:59.621 07:58:22 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:21:59.621 07:58:22 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:21:59.621 07:58:22 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:59.880 07:58:22 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:59.880 07:58:22 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:21:59.880 07:58:22 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:59.880 07:58:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:21:59.880 07:58:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:59.880 07:58:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:21:59.880 07:58:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:21:59.880 07:58:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:00.446 07:58:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:00.446 { 00:22:00.446 "name": "nvme0n1", 00:22:00.446 "aliases": [ 00:22:00.446 "b33d17ef-d000-4b0d-9076-493e69421930" 00:22:00.446 ], 00:22:00.446 "product_name": "NVMe disk", 00:22:00.446 "block_size": 4096, 00:22:00.446 "num_blocks": 1310720, 00:22:00.446 "uuid": "b33d17ef-d000-4b0d-9076-493e69421930", 00:22:00.446 "numa_id": -1, 00:22:00.446 "assigned_rate_limits": { 00:22:00.446 "rw_ios_per_sec": 0, 00:22:00.446 "rw_mbytes_per_sec": 0, 00:22:00.446 "r_mbytes_per_sec": 0, 00:22:00.446 "w_mbytes_per_sec": 0 00:22:00.446 }, 00:22:00.446 "claimed": true, 00:22:00.446 "claim_type": "read_many_write_one", 00:22:00.446 "zoned": false, 00:22:00.446 "supported_io_types": { 00:22:00.446 "read": true, 00:22:00.446 "write": true, 00:22:00.446 "unmap": true, 00:22:00.446 "flush": true, 00:22:00.446 "reset": true, 00:22:00.447 "nvme_admin": true, 00:22:00.447 "nvme_io": true, 00:22:00.447 "nvme_io_md": false, 00:22:00.447 "write_zeroes": true, 00:22:00.447 "zcopy": false, 00:22:00.447 "get_zone_info": false, 00:22:00.447 "zone_management": false, 00:22:00.447 "zone_append": false, 00:22:00.447 "compare": true, 00:22:00.447 "compare_and_write": false, 00:22:00.447 "abort": true, 00:22:00.447 "seek_hole": false, 00:22:00.447 "seek_data": false, 00:22:00.447 "copy": true, 00:22:00.447 "nvme_iov_md": false 00:22:00.447 }, 00:22:00.447 "driver_specific": { 00:22:00.447 
"nvme": [ 00:22:00.447 { 00:22:00.447 "pci_address": "0000:00:11.0", 00:22:00.447 "trid": { 00:22:00.447 "trtype": "PCIe", 00:22:00.447 "traddr": "0000:00:11.0" 00:22:00.447 }, 00:22:00.447 "ctrlr_data": { 00:22:00.447 "cntlid": 0, 00:22:00.447 "vendor_id": "0x1b36", 00:22:00.447 "model_number": "QEMU NVMe Ctrl", 00:22:00.447 "serial_number": "12341", 00:22:00.447 "firmware_revision": "8.0.0", 00:22:00.447 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:00.447 "oacs": { 00:22:00.447 "security": 0, 00:22:00.447 "format": 1, 00:22:00.447 "firmware": 0, 00:22:00.447 "ns_manage": 1 00:22:00.447 }, 00:22:00.447 "multi_ctrlr": false, 00:22:00.447 "ana_reporting": false 00:22:00.447 }, 00:22:00.447 "vs": { 00:22:00.447 "nvme_version": "1.4" 00:22:00.447 }, 00:22:00.447 "ns_data": { 00:22:00.447 "id": 1, 00:22:00.447 "can_share": false 00:22:00.447 } 00:22:00.447 } 00:22:00.447 ], 00:22:00.447 "mp_policy": "active_passive" 00:22:00.447 } 00:22:00.447 } 00:22:00.447 ]' 00:22:00.447 07:58:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:00.447 07:58:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:22:00.447 07:58:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:00.447 07:58:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:22:00.447 07:58:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:22:00.447 07:58:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:22:00.447 07:58:22 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:22:00.447 07:58:22 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:00.447 07:58:22 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:22:00.447 07:58:22 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:00.447 07:58:22 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:00.706 07:58:23 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=ba0ad3af-6f84-48a7-876b-31ab1bf8e75b 00:22:00.706 07:58:23 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:22:00.706 07:58:23 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ba0ad3af-6f84-48a7-876b-31ab1bf8e75b 00:22:00.964 07:58:23 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:01.531 07:58:23 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=fb2561e6-8466-4e9a-bea8-9e6126916020 00:22:01.532 07:58:23 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u fb2561e6-8466-4e9a-bea8-9e6126916020 00:22:01.790 07:58:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=77ec3a87-8c5c-4fa4-aed4-80db1746b2c5 00:22:01.790 07:58:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 77ec3a87-8c5c-4fa4-aed4-80db1746b2c5 00:22:01.790 07:58:24 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:22:01.790 07:58:24 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:01.790 07:58:24 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=77ec3a87-8c5c-4fa4-aed4-80db1746b2c5 00:22:01.790 07:58:24 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:22:01.790 07:58:24 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 77ec3a87-8c5c-4fa4-aed4-80db1746b2c5 00:22:01.790 07:58:24 
ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=77ec3a87-8c5c-4fa4-aed4-80db1746b2c5 00:22:01.790 07:58:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:01.790 07:58:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:22:01.790 07:58:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:22:01.790 07:58:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 77ec3a87-8c5c-4fa4-aed4-80db1746b2c5 00:22:02.049 07:58:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:02.049 { 00:22:02.049 "name": "77ec3a87-8c5c-4fa4-aed4-80db1746b2c5", 00:22:02.049 "aliases": [ 00:22:02.049 "lvs/nvme0n1p0" 00:22:02.049 ], 00:22:02.049 "product_name": "Logical Volume", 00:22:02.049 "block_size": 4096, 00:22:02.049 "num_blocks": 26476544, 00:22:02.049 "uuid": "77ec3a87-8c5c-4fa4-aed4-80db1746b2c5", 00:22:02.049 "assigned_rate_limits": { 00:22:02.049 "rw_ios_per_sec": 0, 00:22:02.049 "rw_mbytes_per_sec": 0, 00:22:02.049 "r_mbytes_per_sec": 0, 00:22:02.049 "w_mbytes_per_sec": 0 00:22:02.049 }, 00:22:02.049 "claimed": false, 00:22:02.049 "zoned": false, 00:22:02.049 "supported_io_types": { 00:22:02.049 "read": true, 00:22:02.049 "write": true, 00:22:02.049 "unmap": true, 00:22:02.049 "flush": false, 00:22:02.049 "reset": true, 00:22:02.049 "nvme_admin": false, 00:22:02.049 "nvme_io": false, 00:22:02.049 "nvme_io_md": false, 00:22:02.049 "write_zeroes": true, 00:22:02.049 "zcopy": false, 00:22:02.049 "get_zone_info": false, 00:22:02.049 "zone_management": false, 00:22:02.049 "zone_append": false, 00:22:02.049 "compare": false, 00:22:02.049 "compare_and_write": false, 00:22:02.049 "abort": false, 00:22:02.049 "seek_hole": true, 00:22:02.049 "seek_data": true, 00:22:02.049 "copy": false, 00:22:02.049 "nvme_iov_md": false 00:22:02.049 }, 00:22:02.049 "driver_specific": { 00:22:02.049 "lvol": { 00:22:02.049 "lvol_store_uuid": "fb2561e6-8466-4e9a-bea8-9e6126916020", 00:22:02.049 "base_bdev": "nvme0n1", 00:22:02.049 "thin_provision": true, 00:22:02.049 "num_allocated_clusters": 0, 00:22:02.049 "snapshot": false, 00:22:02.049 "clone": false, 00:22:02.049 "esnap_clone": false 00:22:02.049 } 00:22:02.049 } 00:22:02.049 } 00:22:02.049 ]' 00:22:02.049 07:58:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:02.049 07:58:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:22:02.049 07:58:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:02.049 07:58:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:02.049 07:58:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:02.049 07:58:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:22:02.049 07:58:24 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:22:02.049 07:58:24 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:22:02.049 07:58:24 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:02.308 07:58:24 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:02.308 07:58:24 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:02.308 07:58:24 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 77ec3a87-8c5c-4fa4-aed4-80db1746b2c5 00:22:02.308 07:58:24 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1378 -- # local bdev_name=77ec3a87-8c5c-4fa4-aed4-80db1746b2c5 00:22:02.308 07:58:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:02.308 07:58:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:22:02.308 07:58:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:22:02.308 07:58:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 77ec3a87-8c5c-4fa4-aed4-80db1746b2c5 00:22:02.876 07:58:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:02.876 { 00:22:02.876 "name": "77ec3a87-8c5c-4fa4-aed4-80db1746b2c5", 00:22:02.876 "aliases": [ 00:22:02.876 "lvs/nvme0n1p0" 00:22:02.876 ], 00:22:02.876 "product_name": "Logical Volume", 00:22:02.876 "block_size": 4096, 00:22:02.876 "num_blocks": 26476544, 00:22:02.876 "uuid": "77ec3a87-8c5c-4fa4-aed4-80db1746b2c5", 00:22:02.876 "assigned_rate_limits": { 00:22:02.876 "rw_ios_per_sec": 0, 00:22:02.876 "rw_mbytes_per_sec": 0, 00:22:02.876 "r_mbytes_per_sec": 0, 00:22:02.876 "w_mbytes_per_sec": 0 00:22:02.876 }, 00:22:02.876 "claimed": false, 00:22:02.876 "zoned": false, 00:22:02.876 "supported_io_types": { 00:22:02.876 "read": true, 00:22:02.876 "write": true, 00:22:02.876 "unmap": true, 00:22:02.876 "flush": false, 00:22:02.876 "reset": true, 00:22:02.876 "nvme_admin": false, 00:22:02.876 "nvme_io": false, 00:22:02.876 "nvme_io_md": false, 00:22:02.876 "write_zeroes": true, 00:22:02.876 "zcopy": false, 00:22:02.876 "get_zone_info": false, 00:22:02.876 "zone_management": false, 00:22:02.876 "zone_append": false, 00:22:02.876 "compare": false, 00:22:02.876 "compare_and_write": false, 00:22:02.876 "abort": false, 00:22:02.876 "seek_hole": true, 00:22:02.876 "seek_data": true, 00:22:02.876 "copy": false, 00:22:02.876 "nvme_iov_md": false 00:22:02.876 }, 00:22:02.876 "driver_specific": { 00:22:02.876 "lvol": { 00:22:02.876 "lvol_store_uuid": "fb2561e6-8466-4e9a-bea8-9e6126916020", 00:22:02.876 "base_bdev": "nvme0n1", 00:22:02.876 "thin_provision": true, 00:22:02.876 "num_allocated_clusters": 0, 00:22:02.876 "snapshot": false, 00:22:02.876 "clone": false, 00:22:02.876 "esnap_clone": false 00:22:02.876 } 00:22:02.876 } 00:22:02.876 } 00:22:02.876 ]' 00:22:02.876 07:58:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:02.876 07:58:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:22:02.876 07:58:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:02.876 07:58:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:02.876 07:58:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:02.876 07:58:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:22:02.876 07:58:25 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:22:02.877 07:58:25 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:03.135 07:58:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:22:03.135 07:58:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 77ec3a87-8c5c-4fa4-aed4-80db1746b2c5 00:22:03.135 07:58:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=77ec3a87-8c5c-4fa4-aed4-80db1746b2c5 00:22:03.135 07:58:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:03.135 07:58:25 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1380 -- # local bs 00:22:03.135 07:58:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:22:03.135 07:58:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 77ec3a87-8c5c-4fa4-aed4-80db1746b2c5 00:22:03.394 07:58:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:03.394 { 00:22:03.394 "name": "77ec3a87-8c5c-4fa4-aed4-80db1746b2c5", 00:22:03.394 "aliases": [ 00:22:03.394 "lvs/nvme0n1p0" 00:22:03.394 ], 00:22:03.394 "product_name": "Logical Volume", 00:22:03.394 "block_size": 4096, 00:22:03.394 "num_blocks": 26476544, 00:22:03.394 "uuid": "77ec3a87-8c5c-4fa4-aed4-80db1746b2c5", 00:22:03.394 "assigned_rate_limits": { 00:22:03.394 "rw_ios_per_sec": 0, 00:22:03.394 "rw_mbytes_per_sec": 0, 00:22:03.394 "r_mbytes_per_sec": 0, 00:22:03.394 "w_mbytes_per_sec": 0 00:22:03.394 }, 00:22:03.394 "claimed": false, 00:22:03.394 "zoned": false, 00:22:03.394 "supported_io_types": { 00:22:03.394 "read": true, 00:22:03.394 "write": true, 00:22:03.394 "unmap": true, 00:22:03.394 "flush": false, 00:22:03.394 "reset": true, 00:22:03.394 "nvme_admin": false, 00:22:03.394 "nvme_io": false, 00:22:03.394 "nvme_io_md": false, 00:22:03.394 "write_zeroes": true, 00:22:03.394 "zcopy": false, 00:22:03.394 "get_zone_info": false, 00:22:03.394 "zone_management": false, 00:22:03.394 "zone_append": false, 00:22:03.394 "compare": false, 00:22:03.394 "compare_and_write": false, 00:22:03.394 "abort": false, 00:22:03.394 "seek_hole": true, 00:22:03.394 "seek_data": true, 00:22:03.394 "copy": false, 00:22:03.394 "nvme_iov_md": false 00:22:03.394 }, 00:22:03.394 "driver_specific": { 00:22:03.394 "lvol": { 00:22:03.394 "lvol_store_uuid": "fb2561e6-8466-4e9a-bea8-9e6126916020", 00:22:03.394 "base_bdev": "nvme0n1", 00:22:03.394 "thin_provision": true, 00:22:03.394 "num_allocated_clusters": 0, 00:22:03.394 "snapshot": false, 00:22:03.394 "clone": false, 00:22:03.394 "esnap_clone": false 00:22:03.394 } 00:22:03.394 } 00:22:03.394 } 00:22:03.394 ]' 00:22:03.394 07:58:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:03.394 07:58:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:22:03.394 07:58:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:03.394 07:58:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:03.394 07:58:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:03.394 07:58:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:22:03.394 07:58:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:22:03.394 07:58:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 77ec3a87-8c5c-4fa4-aed4-80db1746b2c5 -c nvc0n1p0 --l2p_dram_limit 20 00:22:03.653 [2024-11-06 07:58:26.230370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.653 [2024-11-06 07:58:26.230443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:03.653 [2024-11-06 07:58:26.230466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:03.653 [2024-11-06 07:58:26.230485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.653 [2024-11-06 07:58:26.230583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.653 [2024-11-06 07:58:26.230605] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:03.653 [2024-11-06 07:58:26.230619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:22:03.653 [2024-11-06 07:58:26.230637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.653 [2024-11-06 07:58:26.230665] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:03.653 [2024-11-06 07:58:26.231726] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:03.653 [2024-11-06 07:58:26.231755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.653 [2024-11-06 07:58:26.231775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:03.653 [2024-11-06 07:58:26.231788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.098 ms 00:22:03.653 [2024-11-06 07:58:26.231803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.653 [2024-11-06 07:58:26.231948] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID e1cbe1cd-d174-45b0-82db-b1749efbe852 00:22:03.653 [2024-11-06 07:58:26.233805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.653 [2024-11-06 07:58:26.233847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:03.653 [2024-11-06 07:58:26.233866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:22:03.653 [2024-11-06 07:58:26.233882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.653 [2024-11-06 07:58:26.243541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.653 [2024-11-06 07:58:26.243604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:03.653 [2024-11-06 07:58:26.243627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.578 ms 00:22:03.653 [2024-11-06 07:58:26.243640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.653 [2024-11-06 07:58:26.243804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.653 [2024-11-06 07:58:26.243830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:03.653 [2024-11-06 07:58:26.243852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:22:03.653 [2024-11-06 07:58:26.243865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.653 [2024-11-06 07:58:26.243965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.653 [2024-11-06 07:58:26.243984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:03.653 [2024-11-06 07:58:26.244001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:03.653 [2024-11-06 07:58:26.244013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.653 [2024-11-06 07:58:26.244061] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:03.653 [2024-11-06 07:58:26.249378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.653 [2024-11-06 07:58:26.249419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:03.653 [2024-11-06 07:58:26.249436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.331 ms 00:22:03.653 [2024-11-06 07:58:26.249453] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.653 [2024-11-06 07:58:26.249499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.653 [2024-11-06 07:58:26.249522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:03.653 [2024-11-06 07:58:26.249536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:03.653 [2024-11-06 07:58:26.249550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.653 [2024-11-06 07:58:26.249601] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:03.653 [2024-11-06 07:58:26.249772] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:03.653 [2024-11-06 07:58:26.249797] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:03.653 [2024-11-06 07:58:26.249817] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:03.653 [2024-11-06 07:58:26.249833] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:03.653 [2024-11-06 07:58:26.249850] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:03.653 [2024-11-06 07:58:26.249863] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:03.653 [2024-11-06 07:58:26.249877] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:03.653 [2024-11-06 07:58:26.249888] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:03.653 [2024-11-06 07:58:26.249902] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:03.653 [2024-11-06 07:58:26.249915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.653 [2024-11-06 07:58:26.249930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:03.653 [2024-11-06 07:58:26.249942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:22:03.653 [2024-11-06 07:58:26.249959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.653 [2024-11-06 07:58:26.250051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.653 [2024-11-06 07:58:26.250072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:03.653 [2024-11-06 07:58:26.250085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:22:03.653 [2024-11-06 07:58:26.250102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.653 [2024-11-06 07:58:26.250205] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:03.653 [2024-11-06 07:58:26.250234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:03.653 [2024-11-06 07:58:26.250262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:03.653 [2024-11-06 07:58:26.250281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:03.653 [2024-11-06 07:58:26.250298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:03.653 [2024-11-06 07:58:26.250311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:03.653 [2024-11-06 07:58:26.250323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:03.653 
[2024-11-06 07:58:26.250337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:03.653 [2024-11-06 07:58:26.250351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:03.653 [2024-11-06 07:58:26.250365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:03.653 [2024-11-06 07:58:26.250376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:03.653 [2024-11-06 07:58:26.250390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:03.653 [2024-11-06 07:58:26.250400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:03.653 [2024-11-06 07:58:26.250429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:03.653 [2024-11-06 07:58:26.250441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:03.653 [2024-11-06 07:58:26.250457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:03.653 [2024-11-06 07:58:26.250468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:03.653 [2024-11-06 07:58:26.250486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:03.653 [2024-11-06 07:58:26.250497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:03.653 [2024-11-06 07:58:26.250512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:03.653 [2024-11-06 07:58:26.250523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:03.653 [2024-11-06 07:58:26.250536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:03.653 [2024-11-06 07:58:26.250547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:03.653 [2024-11-06 07:58:26.250560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:03.653 [2024-11-06 07:58:26.250571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:03.653 [2024-11-06 07:58:26.250585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:03.653 [2024-11-06 07:58:26.250595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:03.653 [2024-11-06 07:58:26.250608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:03.653 [2024-11-06 07:58:26.250619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:03.654 [2024-11-06 07:58:26.250632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:03.654 [2024-11-06 07:58:26.250643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:03.654 [2024-11-06 07:58:26.250659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:03.654 [2024-11-06 07:58:26.250670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:03.654 [2024-11-06 07:58:26.250683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:03.654 [2024-11-06 07:58:26.250694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:03.654 [2024-11-06 07:58:26.250707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:03.654 [2024-11-06 07:58:26.250718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:03.654 [2024-11-06 07:58:26.250732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:03.654 [2024-11-06 07:58:26.250743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:22:03.654 [2024-11-06 07:58:26.250756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:03.654 [2024-11-06 07:58:26.250769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:03.654 [2024-11-06 07:58:26.250783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:03.654 [2024-11-06 07:58:26.250793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:03.654 [2024-11-06 07:58:26.250806] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:03.654 [2024-11-06 07:58:26.250819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:03.654 [2024-11-06 07:58:26.250833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:03.654 [2024-11-06 07:58:26.250844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:03.654 [2024-11-06 07:58:26.250863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:03.654 [2024-11-06 07:58:26.250874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:03.654 [2024-11-06 07:58:26.250888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:03.654 [2024-11-06 07:58:26.250899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:03.654 [2024-11-06 07:58:26.250913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:03.654 [2024-11-06 07:58:26.250924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:03.654 [2024-11-06 07:58:26.250943] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:03.654 [2024-11-06 07:58:26.250959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:03.654 [2024-11-06 07:58:26.250975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:03.654 [2024-11-06 07:58:26.250987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:03.654 [2024-11-06 07:58:26.251001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:03.654 [2024-11-06 07:58:26.251013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:03.654 [2024-11-06 07:58:26.251027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:03.654 [2024-11-06 07:58:26.251039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:03.654 [2024-11-06 07:58:26.251061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:03.654 [2024-11-06 07:58:26.251073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:03.654 [2024-11-06 07:58:26.251089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:03.654 [2024-11-06 07:58:26.251101] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:03.654 [2024-11-06 07:58:26.251116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:03.654 [2024-11-06 07:58:26.251127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:03.654 [2024-11-06 07:58:26.251143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:03.654 [2024-11-06 07:58:26.251155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:03.654 [2024-11-06 07:58:26.251169] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:03.654 [2024-11-06 07:58:26.251182] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:03.654 [2024-11-06 07:58:26.251200] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:03.654 [2024-11-06 07:58:26.251213] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:03.654 [2024-11-06 07:58:26.251227] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:03.654 [2024-11-06 07:58:26.251239] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:03.654 [2024-11-06 07:58:26.251269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.654 [2024-11-06 07:58:26.251283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:03.654 [2024-11-06 07:58:26.251303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.127 ms 00:22:03.654 [2024-11-06 07:58:26.251314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.654 [2024-11-06 07:58:26.251370] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
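The layout dump above is the result of assembling ftl0 from two devices earlier in the trace: a 103424 MiB thin-provisioned logical volume on the 0000:00:11.0 controller as the base device, and a 5171 MiB split of the 0000:00:10.0 controller as the NV (write-buffer) cache, with the L2P table capped at 20 MiB of DRAM. A condensed sketch of that RPC sequence, with commands copied from the trace; the one assumption is addressing the lvol by its lvs/nvme0n1p0 alias rather than the UUID the script captures from bdev_lvol_create:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Base device: thin-provisioned 103424 MiB lvol on the first NVMe.
    $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    $RPC bdev_lvol_create_lvstore nvme0n1 lvs
    $RPC bdev_lvol_create -l lvs nvme0n1p0 103424 -t

    # NV cache: a single 5171 MiB split of the second NVMe (yields nvc0n1p0).
    $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    $RPC bdev_split_create nvc0n1 -s 5171 1

    # Bind base + cache into the FTL bdev; cap the L2P at 20 MiB of DRAM
    # (the ftl_l2p_cache notice further down reports 19 of 20 MiB resident).
    $RPC -t 240 bdev_ftl_create -b ftl0 -d lvs/nvme0n1p0 -c nvc0n1p0 --l2p_dram_limit 20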
00:22:03.654 [2024-11-06 07:58:26.251388] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:06.939 [2024-11-06 07:58:29.392892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.939 [2024-11-06 07:58:29.392978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:06.939 [2024-11-06 07:58:29.393015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3141.520 ms 00:22:06.939 [2024-11-06 07:58:29.393041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.939 [2024-11-06 07:58:29.433739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.939 [2024-11-06 07:58:29.433814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:06.939 [2024-11-06 07:58:29.433841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.358 ms 00:22:06.939 [2024-11-06 07:58:29.433855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.939 [2024-11-06 07:58:29.434058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.939 [2024-11-06 07:58:29.434080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:06.939 [2024-11-06 07:58:29.434102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:22:06.939 [2024-11-06 07:58:29.434114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.939 [2024-11-06 07:58:29.493803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.939 [2024-11-06 07:58:29.493867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:06.939 [2024-11-06 07:58:29.493895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.628 ms 00:22:06.939 [2024-11-06 07:58:29.493908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.939 [2024-11-06 07:58:29.493981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.939 [2024-11-06 07:58:29.493998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:06.939 [2024-11-06 07:58:29.494015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:06.939 [2024-11-06 07:58:29.494032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.939 [2024-11-06 07:58:29.495432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.939 [2024-11-06 07:58:29.495470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:06.939 [2024-11-06 07:58:29.495489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.277 ms 00:22:06.939 [2024-11-06 07:58:29.495503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.939 [2024-11-06 07:58:29.495675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.939 [2024-11-06 07:58:29.495695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:06.939 [2024-11-06 07:58:29.495714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:22:06.939 [2024-11-06 07:58:29.495727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.939 [2024-11-06 07:58:29.515234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.939 [2024-11-06 07:58:29.515311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:06.939 [2024-11-06 
07:58:29.515336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.476 ms 00:22:06.939 [2024-11-06 07:58:29.515349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:06.939 [2024-11-06 07:58:29.532908] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:22:06.939 [2024-11-06 07:58:29.540736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:06.939 [2024-11-06 07:58:29.540811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:06.939 [2024-11-06 07:58:29.540834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.222 ms 00:22:06.939 [2024-11-06 07:58:29.540850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.197 [2024-11-06 07:58:29.620260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.197 [2024-11-06 07:58:29.620354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:07.197 [2024-11-06 07:58:29.620377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.332 ms 00:22:07.197 [2024-11-06 07:58:29.620393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.197 [2024-11-06 07:58:29.620680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.197 [2024-11-06 07:58:29.620710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:07.197 [2024-11-06 07:58:29.620726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.183 ms 00:22:07.197 [2024-11-06 07:58:29.620740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.197 [2024-11-06 07:58:29.656013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.197 [2024-11-06 07:58:29.656098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:07.197 [2024-11-06 07:58:29.656121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.170 ms 00:22:07.198 [2024-11-06 07:58:29.656137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.198 [2024-11-06 07:58:29.690049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.198 [2024-11-06 07:58:29.690141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:07.198 [2024-11-06 07:58:29.690164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.827 ms 00:22:07.198 [2024-11-06 07:58:29.690179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.198 [2024-11-06 07:58:29.691104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.198 [2024-11-06 07:58:29.691141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:07.198 [2024-11-06 07:58:29.691158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.813 ms 00:22:07.198 [2024-11-06 07:58:29.691173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.198 [2024-11-06 07:58:29.791980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.198 [2024-11-06 07:58:29.792082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:07.198 [2024-11-06 07:58:29.792104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 100.680 ms 00:22:07.198 [2024-11-06 07:58:29.792120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.456 [2024-11-06 
07:58:29.828996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.456 [2024-11-06 07:58:29.829093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:07.456 [2024-11-06 07:58:29.829117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.660 ms 00:22:07.456 [2024-11-06 07:58:29.829133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.456 [2024-11-06 07:58:29.865357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.456 [2024-11-06 07:58:29.865457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:07.456 [2024-11-06 07:58:29.865479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.108 ms 00:22:07.456 [2024-11-06 07:58:29.865496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.456 [2024-11-06 07:58:29.900151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.456 [2024-11-06 07:58:29.900245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:07.456 [2024-11-06 07:58:29.900293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.562 ms 00:22:07.456 [2024-11-06 07:58:29.900311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.456 [2024-11-06 07:58:29.900430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.456 [2024-11-06 07:58:29.900457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:07.456 [2024-11-06 07:58:29.900471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:22:07.456 [2024-11-06 07:58:29.900486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.456 [2024-11-06 07:58:29.900652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.456 [2024-11-06 07:58:29.900676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:07.456 [2024-11-06 07:58:29.900689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:22:07.456 [2024-11-06 07:58:29.900704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.456 [2024-11-06 07:58:29.902109] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3671.202 ms, result 0 00:22:07.456 { 00:22:07.456 "name": "ftl0", 00:22:07.456 "uuid": "e1cbe1cd-d174-45b0-82db-b1749efbe852" 00:22:07.456 } 00:22:07.456 07:58:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:22:07.456 07:58:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:22:07.456 07:58:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:22:07.714 07:58:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:22:07.972 [2024-11-06 07:58:30.366426] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:22:07.972 I/O size of 69632 is greater than zero copy threshold (65536). 00:22:07.972 Zero copy mechanism will not be used. 00:22:07.972 Running I/O for 4 seconds... 
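The first bdevperf pass drives random writes at queue depth 1 with a 69632-byte (68 KiB) I/O size for four seconds; as the notice above says, 69632 is over the 65536-byte zero-copy threshold, so zero copy is skipped for this pass. The per-second throughput samples and the latency summary follow. The reported MiB/s figure can be cross-checked from IOPS and I/O size (an illustrative check, not part of the test):

    # MiB/s = IOPS * io_size_bytes / 2^20
    awk 'BEGIN { printf "%.2f MiB/s\n", 1664.90 * 69632 / 2^20 }'    # prints 110.56 MiB/s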
00:22:09.840 1602.00 IOPS, 106.38 MiB/s
[2024-11-06T07:58:33.402Z] 1629.50 IOPS, 108.21 MiB/s
[2024-11-06T07:58:34.779Z] 1660.00 IOPS, 110.23 MiB/s
[2024-11-06T07:58:34.779Z] 1665.50 IOPS, 110.60 MiB/s
00:22:12.150 Latency(us)
00:22:12.150 [2024-11-06T07:58:34.779Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:12.150 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:22:12.150 ftl0 : 4.00 1664.90 110.56 0.00 0.00 628.28 255.07 2517.18
00:22:12.150 [2024-11-06T07:58:34.779Z] ===================================================================================================================
00:22:12.151 [2024-11-06T07:58:34.780Z] Total : 1664.90 110.56 0.00 0.00 628.28 255.07 2517.18
00:22:12.151 [2024-11-06 07:58:34.379416] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:22:12.151 {
00:22:12.151 "results": [
00:22:12.151 {
00:22:12.151 "job": "ftl0",
00:22:12.151 "core_mask": "0x1",
00:22:12.151 "workload": "randwrite",
00:22:12.151 "status": "finished",
00:22:12.151 "queue_depth": 1,
00:22:12.151 "io_size": 69632,
00:22:12.151 "runtime": 4.002042,
00:22:12.151 "iops": 1664.9000685150231,
00:22:12.151 "mibps": 110.55977017482576,
00:22:12.151 "io_failed": 0,
00:22:12.151 "io_timeout": 0,
00:22:12.151 "avg_latency_us": 628.2791098740672,
00:22:12.151 "min_latency_us": 255.0690909090909,
00:22:12.151 "max_latency_us": 2517.1781818181817
00:22:12.151 }
00:22:12.151 ],
00:22:12.151 "core_count": 1
00:22:12.151 }
00:22:12.151 07:58:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
00:22:12.151 [2024-11-06 07:58:34.547113] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:22:12.151 Running I/O for 4 seconds...
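The second pass switches to 4096-byte random writes at queue depth 128, trading per-I/O latency for aggregate IOPS. Each pass also prints the machine-readable JSON blob seen above; a one-liner for post-processing it, assuming the blob were captured to a file (results.json is a hypothetical name):

    # Pull the headline numbers out of a saved bdevperf results blob.
    jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg latency \(.avg_latency_us) us"' results.json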
00:22:14.020 8316.00 IOPS, 32.48 MiB/s
[2024-11-06T07:58:37.587Z] 7798.50 IOPS, 30.46 MiB/s
[2024-11-06T07:58:38.963Z] 7543.33 IOPS, 29.47 MiB/s
[2024-11-06T07:58:38.963Z] 7448.75 IOPS, 29.10 MiB/s
00:22:16.334 Latency(us)
00:22:16.334 [2024-11-06T07:58:38.963Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:16.334 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:22:16.334 ftl0 : 4.02 7435.20 29.04 0.00 0.00 17163.19 323.96 36223.53
00:22:16.334 [2024-11-06T07:58:38.963Z] ===================================================================================================================
00:22:16.334 [2024-11-06T07:58:38.963Z] Total : 7435.20 29.04 0.00 0.00 17163.19 0.00 36223.53
00:22:16.334 {
00:22:16.334 "results": [
00:22:16.334 {
00:22:16.334 "job": "ftl0",
00:22:16.334 "core_mask": "0x1",
00:22:16.334 "workload": "randwrite",
00:22:16.334 "status": "finished",
00:22:16.334 "queue_depth": 128,
00:22:16.334 "io_size": 4096,
00:22:16.334 "runtime": 4.024506,
00:22:16.334 "iops": 7435.19825787314,
00:22:16.334 "mibps": 29.043743194816955,
00:22:16.334 "io_failed": 0,
00:22:16.334 "io_timeout": 0,
00:22:16.334 "avg_latency_us": 17163.18889075901,
00:22:16.334 "min_latency_us": 323.9563636363636,
00:22:16.334 "max_latency_us": 36223.534545454546
00:22:16.334 }
00:22:16.334 ],
00:22:16.334 "core_count": 1
00:22:16.334 }
00:22:16.334 [2024-11-06 07:58:38.584178] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:22:16.334 07:58:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
00:22:16.334 [2024-11-06 07:58:38.712533] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:22:16.334 Running I/O for 4 seconds...
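At queue depth 128 the reported average latency should sit near Little's law, latency ~= queue_depth / IOPS: 128 / 7435.20 IOPS gives about 17215 us against the reported 17163.19 us, with the small gap coming from ramp effects. The final pass below uses the verify workload, which reads each completed write back and compares it over a 0x1400000-byte (20 MiB) LBA range, so its IOPS land under the pure randwrite pass. An illustrative check:

    # Little's law sanity check: avg latency ~= queue_depth / IOPS
    awk 'BEGIN { printf "%.0f us\n", 128 / 7435.20 * 1e6 }'    # prints 17215 us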
00:22:18.206 5934.00 IOPS, 23.18 MiB/s
[2024-11-06T07:58:41.770Z] 5819.50 IOPS, 22.73 MiB/s
[2024-11-06T07:58:42.756Z] 5687.33 IOPS, 22.22 MiB/s
[2024-11-06T07:58:42.756Z] 5757.00 IOPS, 22.49 MiB/s
00:22:20.127 Latency(us)
00:22:20.127 [2024-11-06T07:58:42.756Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:20.127 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:20.127 Verification LBA range: start 0x0 length 0x1400000
00:22:20.127 ftl0 : 4.01 5769.07 22.54 0.00 0.00 22112.07 385.40 28955.00
00:22:20.127 [2024-11-06T07:58:42.756Z] ===================================================================================================================
00:22:20.127 [2024-11-06T07:58:42.756Z] Total : 5769.07 22.54 0.00 0.00 22112.07 0.00 28955.00
00:22:20.127 [2024-11-06 07:58:42.747680] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:22:20.127 {
00:22:20.127 "results": [
00:22:20.127 {
00:22:20.127 "job": "ftl0",
00:22:20.127 "core_mask": "0x1",
00:22:20.127 "workload": "verify",
00:22:20.127 "status": "finished",
00:22:20.127 "verify_range": {
00:22:20.127 "start": 0,
00:22:20.127 "length": 20971520
00:22:20.127 },
00:22:20.127 "queue_depth": 128,
00:22:20.127 "io_size": 4096,
00:22:20.127 "runtime": 4.013642,
00:22:20.127 "iops": 5769.074571175008,
00:22:20.127 "mibps": 22.535447543652374,
00:22:20.127 "io_failed": 0,
00:22:20.127 "io_timeout": 0,
00:22:20.127 "avg_latency_us": 22112.07476319664,
00:22:20.127 "min_latency_us": 385.3963636363636,
00:22:20.127 "max_latency_us": 28954.996363636365
00:22:20.127 }
00:22:20.127 ],
00:22:20.127 "core_count": 1
00:22:20.127 }
00:22:20.386 07:58:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
00:22:20.645 [2024-11-06 07:58:43.054115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:20.645 [2024-11-06 07:58:43.054194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:22:20.645 [2024-11-06 07:58:43.054242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:22:20.645 [2024-11-06 07:58:43.054288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:20.645 [2024-11-06 07:58:43.054326] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:22:20.645 [2024-11-06 07:58:43.058069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:20.645 [2024-11-06 07:58:43.058274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:22:20.645 [2024-11-06 07:58:43.058313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.712 ms
00:22:20.645 [2024-11-06 07:58:43.058328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:20.645 [2024-11-06 07:58:43.060235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:20.645 [2024-11-06 07:58:43.060291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:22:20.645 [2024-11-06 07:58:43.060318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.854 ms
00:22:20.645 [2024-11-06 07:58:43.060331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:20.645 [2024-11-06 07:58:43.255506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:20.645 [2024-11-06 07:58:43.255591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist
L2P 00:22:20.645 [2024-11-06 07:58:43.255621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 195.137 ms 00:22:20.645 [2024-11-06 07:58:43.255634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.645 [2024-11-06 07:58:43.262184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.645 [2024-11-06 07:58:43.262235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:20.645 [2024-11-06 07:58:43.262271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.492 ms 00:22:20.645 [2024-11-06 07:58:43.262285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.905 [2024-11-06 07:58:43.295723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.905 [2024-11-06 07:58:43.295802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:20.905 [2024-11-06 07:58:43.295827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.278 ms 00:22:20.905 [2024-11-06 07:58:43.295840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.905 [2024-11-06 07:58:43.316424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.905 [2024-11-06 07:58:43.316501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:20.905 [2024-11-06 07:58:43.316531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.490 ms 00:22:20.905 [2024-11-06 07:58:43.316548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.905 [2024-11-06 07:58:43.316801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.905 [2024-11-06 07:58:43.316826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:20.905 [2024-11-06 07:58:43.316847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:22:20.905 [2024-11-06 07:58:43.316859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.905 [2024-11-06 07:58:43.349611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.905 [2024-11-06 07:58:43.349691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:20.905 [2024-11-06 07:58:43.349716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.714 ms 00:22:20.905 [2024-11-06 07:58:43.349729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.905 [2024-11-06 07:58:43.382171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.905 [2024-11-06 07:58:43.382574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:20.905 [2024-11-06 07:58:43.382623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.344 ms 00:22:20.905 [2024-11-06 07:58:43.382639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.905 [2024-11-06 07:58:43.414927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.905 [2024-11-06 07:58:43.415009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:20.905 [2024-11-06 07:58:43.415035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.187 ms 00:22:20.905 [2024-11-06 07:58:43.415047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.905 [2024-11-06 07:58:43.447290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.905 [2024-11-06 07:58:43.447367] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:20.905 [2024-11-06 07:58:43.447396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.030 ms 00:22:20.905 [2024-11-06 07:58:43.447410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.905 [2024-11-06 07:58:43.447492] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:20.905 [2024-11-06 07:58:43.447519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:20.905 [2024-11-06 07:58:43.447538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:20.905 [2024-11-06 07:58:43.447551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:20.905 [2024-11-06 07:58:43.447566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:20.905 [2024-11-06 07:58:43.447579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:20.905 [2024-11-06 07:58:43.447594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:20.905 [2024-11-06 07:58:43.447607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:20.905 [2024-11-06 07:58:43.447622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:20.905 [2024-11-06 07:58:43.447634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:20.905 [2024-11-06 07:58:43.447649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:20.905 [2024-11-06 07:58:43.447661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:20.905 [2024-11-06 07:58:43.447675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:20.905 [2024-11-06 07:58:43.447688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:20.905 [2024-11-06 07:58:43.447705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:20.905 [2024-11-06 07:58:43.447718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:20.905 [2024-11-06 07:58:43.447732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:20.905 [2024-11-06 07:58:43.447745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:20.905 [2024-11-06 07:58:43.447769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:20.905 [2024-11-06 07:58:43.447781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:20.905 [2024-11-06 07:58:43.447799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:20.905 [2024-11-06 07:58:43.447811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:20.905 [2024-11-06 07:58:43.447826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:22:20.905 [2024-11-06 07:58:43.447839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:20.905 [2024-11-06 07:58:43.447854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:20.905 [2024-11-06 07:58:43.447867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:20.905 [2024-11-06 07:58:43.447882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:20.905 [2024-11-06 07:58:43.447895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:20.905 [2024-11-06 07:58:43.447909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:20.905 [2024-11-06 07:58:43.447922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:20.905 [2024-11-06 07:58:43.447949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:20.905 [2024-11-06 07:58:43.447964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:20.905 [2024-11-06 07:58:43.447979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.447991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448942] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.448985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:20.906 [2024-11-06 07:58:43.449017] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:20.906 [2024-11-06 07:58:43.449034] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e1cbe1cd-d174-45b0-82db-b1749efbe852 00:22:20.906 [2024-11-06 07:58:43.449047] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:20.906 [2024-11-06 07:58:43.449062] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:20.906 [2024-11-06 07:58:43.449073] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:20.906 [2024-11-06 07:58:43.449093] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:20.906 [2024-11-06 07:58:43.449104] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:20.906 [2024-11-06 07:58:43.449118] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:20.906 [2024-11-06 07:58:43.449130] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:20.906 [2024-11-06 07:58:43.449145] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:20.906 [2024-11-06 07:58:43.449156] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:20.906 [2024-11-06 07:58:43.449171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.906 [2024-11-06 07:58:43.449183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:20.906 [2024-11-06 07:58:43.449199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.683 ms 00:22:20.906 [2024-11-06 07:58:43.449210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.906 [2024-11-06 07:58:43.466909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.906 [2024-11-06 07:58:43.467188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:20.906 [2024-11-06 07:58:43.467228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.581 ms 00:22:20.906 [2024-11-06 07:58:43.467242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.906 [2024-11-06 07:58:43.467775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.906 [2024-11-06 07:58:43.467799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:20.906 [2024-11-06 07:58:43.467817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.447 ms 00:22:20.906 [2024-11-06 07:58:43.467829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.906 [2024-11-06 07:58:43.515451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:20.906 [2024-11-06 07:58:43.515527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:20.906 [2024-11-06 07:58:43.515554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:20.906 [2024-11-06 07:58:43.515568] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:22:20.907 [2024-11-06 07:58:43.515666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:20.907 [2024-11-06 07:58:43.515683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:20.907 [2024-11-06 07:58:43.515698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:20.907 [2024-11-06 07:58:43.515710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.907 [2024-11-06 07:58:43.515858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:20.907 [2024-11-06 07:58:43.515892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:20.907 [2024-11-06 07:58:43.515908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:20.907 [2024-11-06 07:58:43.515920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.907 [2024-11-06 07:58:43.515948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:20.907 [2024-11-06 07:58:43.515964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:20.907 [2024-11-06 07:58:43.515979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:20.907 [2024-11-06 07:58:43.515991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.166 [2024-11-06 07:58:43.628170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.166 [2024-11-06 07:58:43.628271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:21.166 [2024-11-06 07:58:43.628302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.166 [2024-11-06 07:58:43.628315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.166 [2024-11-06 07:58:43.720340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.166 [2024-11-06 07:58:43.720421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:21.166 [2024-11-06 07:58:43.720444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.166 [2024-11-06 07:58:43.720456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.166 [2024-11-06 07:58:43.720614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.166 [2024-11-06 07:58:43.720635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:21.166 [2024-11-06 07:58:43.720652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.166 [2024-11-06 07:58:43.720667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.166 [2024-11-06 07:58:43.720738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.166 [2024-11-06 07:58:43.720756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:21.166 [2024-11-06 07:58:43.720772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.166 [2024-11-06 07:58:43.720784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.166 [2024-11-06 07:58:43.720919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.166 [2024-11-06 07:58:43.720940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:21.166 [2024-11-06 07:58:43.720959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:22:21.166 [2024-11-06 07:58:43.720975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.166 [2024-11-06 07:58:43.721050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.166 [2024-11-06 07:58:43.721069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:21.166 [2024-11-06 07:58:43.721085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.166 [2024-11-06 07:58:43.721097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.166 [2024-11-06 07:58:43.721151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.166 [2024-11-06 07:58:43.721166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:21.166 [2024-11-06 07:58:43.721181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.166 [2024-11-06 07:58:43.721193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.166 [2024-11-06 07:58:43.721304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:21.166 [2024-11-06 07:58:43.721338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:21.166 [2024-11-06 07:58:43.721355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:21.166 [2024-11-06 07:58:43.721367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.166 [2024-11-06 07:58:43.721549] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 667.375 ms, result 0 00:22:21.166 true 00:22:21.166 07:58:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 75493 00:22:21.166 07:58:43 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 75493 ']' 00:22:21.166 07:58:43 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # kill -0 75493 00:22:21.166 07:58:43 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # uname 00:22:21.166 07:58:43 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:21.166 07:58:43 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75493 00:22:21.166 07:58:43 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:21.166 killing process with pid 75493 00:22:21.166 Received shutdown signal, test time was about 4.000000 seconds 00:22:21.166 00:22:21.166 Latency(us) 00:22:21.166 [2024-11-06T07:58:43.795Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.166 [2024-11-06T07:58:43.795Z] =================================================================================================================== 00:22:21.166 [2024-11-06T07:58:43.795Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:21.166 07:58:43 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:21.166 07:58:43 ftl.ftl_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75493' 00:22:21.166 07:58:43 ftl.ftl_bdevperf -- common/autotest_common.sh@969 -- # kill 75493 00:22:21.166 07:58:43 ftl.ftl_bdevperf -- common/autotest_common.sh@974 -- # wait 75493 00:22:22.543 Remove shared memory files 00:22:22.543 07:58:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:22:22.543 07:58:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:22:22.543 07:58:44 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:22.543 07:58:44 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:22:22.543 07:58:44 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:22:22.543 07:58:44 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:22:22.543 07:58:44 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:22.543 07:58:44 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:22:22.543 00:22:22.543 real 0m24.022s 00:22:22.543 user 0m28.049s 00:22:22.543 sys 0m1.270s 00:22:22.543 07:58:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:22.543 07:58:44 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:22.543 ************************************ 00:22:22.543 END TEST ftl_bdevperf 00:22:22.543 ************************************ 00:22:22.543 07:58:44 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:22:22.543 07:58:44 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:22:22.543 07:58:44 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:22.543 07:58:44 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:22.543 ************************************ 00:22:22.543 START TEST ftl_trim 00:22:22.543 ************************************ 00:22:22.543 07:58:44 ftl.ftl_trim -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:22:22.543 * Looking for test storage... 00:22:22.543 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:22.543 07:58:44 ftl.ftl_trim -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:22:22.543 07:58:44 ftl.ftl_trim -- common/autotest_common.sh@1689 -- # lcov --version 00:22:22.543 07:58:44 ftl.ftl_trim -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:22:22.543 07:58:45 ftl.ftl_trim -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:22:22.543 07:58:45 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:22.543 07:58:45 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:22.543 07:58:45 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:22.543 07:58:45 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:22:22.543 07:58:45 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:22:22.543 07:58:45 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:22:22.543 07:58:45 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:22:22.543 07:58:45 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:22:22.543 07:58:45 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:22:22.543 07:58:45 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:22:22.543 07:58:45 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:22.543 07:58:45 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:22:22.543 07:58:45 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:22:22.543 07:58:45 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:22.543 07:58:45 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:22.543 07:58:45 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:22:22.543 07:58:45 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:22:22.543 07:58:45 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:22.543 07:58:45 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:22:22.543 07:58:45 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:22:22.543 07:58:45 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:22:22.543 07:58:45 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:22:22.543 07:58:45 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:22.543 07:58:45 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:22:22.543 07:58:45 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:22:22.543 07:58:45 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:22.543 07:58:45 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:22.543 07:58:45 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:22:22.543 07:58:45 ftl.ftl_trim -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:22.543 07:58:45 ftl.ftl_trim -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:22:22.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:22.543 --rc genhtml_branch_coverage=1 00:22:22.543 --rc genhtml_function_coverage=1 00:22:22.543 --rc genhtml_legend=1 00:22:22.543 --rc geninfo_all_blocks=1 00:22:22.543 --rc geninfo_unexecuted_blocks=1 00:22:22.543 00:22:22.543 ' 00:22:22.543 07:58:45 ftl.ftl_trim -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:22:22.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:22.543 --rc genhtml_branch_coverage=1 00:22:22.543 --rc genhtml_function_coverage=1 00:22:22.543 --rc genhtml_legend=1 00:22:22.543 --rc geninfo_all_blocks=1 00:22:22.543 --rc geninfo_unexecuted_blocks=1 00:22:22.543 00:22:22.543 ' 00:22:22.543 07:58:45 ftl.ftl_trim -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:22:22.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:22.543 --rc genhtml_branch_coverage=1 00:22:22.543 --rc genhtml_function_coverage=1 00:22:22.543 --rc genhtml_legend=1 00:22:22.543 --rc geninfo_all_blocks=1 00:22:22.543 --rc geninfo_unexecuted_blocks=1 00:22:22.543 00:22:22.543 ' 00:22:22.543 07:58:45 ftl.ftl_trim -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:22:22.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:22.543 --rc genhtml_branch_coverage=1 00:22:22.543 --rc genhtml_function_coverage=1 00:22:22.543 --rc genhtml_legend=1 00:22:22.543 --rc geninfo_all_blocks=1 00:22:22.543 --rc geninfo_unexecuted_blocks=1 00:22:22.543 00:22:22.543 ' 00:22:22.543 07:58:45 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:22.543 07:58:45 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:22:22.543 07:58:45 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:22.543 07:58:45 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:22.543 07:58:45 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
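The xtrace run above is the stock lcov probe: lt 1.15 2 dispatches to cmp_versions in scripts/common.sh, which splits each version string on '.' and '-' (IFS=.-; read -ra) and compares the pieces numerically, component by component, so 1.15 compares as newer than 1.9, which plain string comparison would get wrong. The same pattern reduced to a standalone sketch (ver_lt is an illustrative name, not the SPDK helper):

    # Succeed when dotted version $1 is strictly older than $2.
    ver_lt() {
        local IFS=.- i
        local -a a=($1) b=($2)
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # versions are equal
    }
    ver_lt 1.15 2 && echo "lcov is older than 2"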
00:22:22.543 07:58:45 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:22.543 07:58:45 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:22.543 07:58:45 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:22.543 07:58:45 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:22.543 07:58:45 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:22.543 07:58:45 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:22.543 07:58:45 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:22.543 07:58:45 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:22.543 07:58:45 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:22.543 07:58:45 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:22.543 07:58:45 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:22.543 07:58:45 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:22.543 07:58:45 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:22.543 07:58:45 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:22.544 07:58:45 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:22.544 07:58:45 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:22.544 07:58:45 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:22.544 07:58:45 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:22.544 07:58:45 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:22.544 07:58:45 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:22.544 07:58:45 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:22.544 07:58:45 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:22.544 07:58:45 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:22.544 07:58:45 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:22.544 07:58:45 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:22.544 07:58:45 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:22:22.544 07:58:45 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:22:22.544 07:58:45 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:22:22.544 07:58:45 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:22:22.544 07:58:45 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:22:22.544 07:58:45 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:22:22.544 07:58:45 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:22:22.544 07:58:45 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:22:22.544 07:58:45 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:22.544 07:58:45 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:22.544 07:58:45 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:22:22.544 07:58:45 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=75856 00:22:22.544 07:58:45 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 75856 00:22:22.544 07:58:45 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:22:22.544 07:58:45 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 75856 ']' 00:22:22.544 07:58:45 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:22.544 07:58:45 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:22.544 07:58:45 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:22.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:22.544 07:58:45 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:22.544 07:58:45 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:22:22.802 [2024-11-06 07:58:45.256076] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:22:22.802 [2024-11-06 07:58:45.256506] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75856 ] 00:22:23.061 [2024-11-06 07:58:45.444637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:23.061 [2024-11-06 07:58:45.579932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:23.061 [2024-11-06 07:58:45.580052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:23.061 [2024-11-06 07:58:45.580068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:24.010 07:58:46 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:24.010 07:58:46 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:22:24.010 07:58:46 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:24.010 07:58:46 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:22:24.010 07:58:46 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:24.010 07:58:46 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:22:24.010 07:58:46 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:22:24.010 07:58:46 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:24.269 07:58:46 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:24.269 07:58:46 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:22:24.269 07:58:46 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:24.269 07:58:46 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:22:24.269 07:58:46 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:24.269 07:58:46 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:22:24.269 07:58:46 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:22:24.269 07:58:46 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:24.837 07:58:47 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:24.837 { 00:22:24.837 "name": "nvme0n1", 00:22:24.837 "aliases": [ 
00:22:24.837 "850be99f-c0ef-4336-9239-8a917b61ac0b" 00:22:24.837 ], 00:22:24.837 "product_name": "NVMe disk", 00:22:24.837 "block_size": 4096, 00:22:24.837 "num_blocks": 1310720, 00:22:24.837 "uuid": "850be99f-c0ef-4336-9239-8a917b61ac0b", 00:22:24.837 "numa_id": -1, 00:22:24.837 "assigned_rate_limits": { 00:22:24.837 "rw_ios_per_sec": 0, 00:22:24.837 "rw_mbytes_per_sec": 0, 00:22:24.837 "r_mbytes_per_sec": 0, 00:22:24.837 "w_mbytes_per_sec": 0 00:22:24.837 }, 00:22:24.837 "claimed": true, 00:22:24.837 "claim_type": "read_many_write_one", 00:22:24.837 "zoned": false, 00:22:24.837 "supported_io_types": { 00:22:24.837 "read": true, 00:22:24.837 "write": true, 00:22:24.837 "unmap": true, 00:22:24.837 "flush": true, 00:22:24.837 "reset": true, 00:22:24.837 "nvme_admin": true, 00:22:24.837 "nvme_io": true, 00:22:24.837 "nvme_io_md": false, 00:22:24.837 "write_zeroes": true, 00:22:24.837 "zcopy": false, 00:22:24.837 "get_zone_info": false, 00:22:24.837 "zone_management": false, 00:22:24.837 "zone_append": false, 00:22:24.837 "compare": true, 00:22:24.837 "compare_and_write": false, 00:22:24.837 "abort": true, 00:22:24.837 "seek_hole": false, 00:22:24.837 "seek_data": false, 00:22:24.837 "copy": true, 00:22:24.837 "nvme_iov_md": false 00:22:24.837 }, 00:22:24.837 "driver_specific": { 00:22:24.837 "nvme": [ 00:22:24.837 { 00:22:24.837 "pci_address": "0000:00:11.0", 00:22:24.837 "trid": { 00:22:24.837 "trtype": "PCIe", 00:22:24.837 "traddr": "0000:00:11.0" 00:22:24.837 }, 00:22:24.837 "ctrlr_data": { 00:22:24.837 "cntlid": 0, 00:22:24.837 "vendor_id": "0x1b36", 00:22:24.837 "model_number": "QEMU NVMe Ctrl", 00:22:24.837 "serial_number": "12341", 00:22:24.837 "firmware_revision": "8.0.0", 00:22:24.837 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:24.837 "oacs": { 00:22:24.837 "security": 0, 00:22:24.837 "format": 1, 00:22:24.837 "firmware": 0, 00:22:24.837 "ns_manage": 1 00:22:24.837 }, 00:22:24.837 "multi_ctrlr": false, 00:22:24.837 "ana_reporting": false 00:22:24.837 }, 00:22:24.837 "vs": { 00:22:24.837 "nvme_version": "1.4" 00:22:24.837 }, 00:22:24.837 "ns_data": { 00:22:24.837 "id": 1, 00:22:24.837 "can_share": false 00:22:24.837 } 00:22:24.837 } 00:22:24.837 ], 00:22:24.837 "mp_policy": "active_passive" 00:22:24.837 } 00:22:24.837 } 00:22:24.837 ]' 00:22:24.837 07:58:47 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:24.837 07:58:47 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:22:24.837 07:58:47 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:24.837 07:58:47 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:22:24.837 07:58:47 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:22:24.837 07:58:47 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:22:24.837 07:58:47 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:22:24.837 07:58:47 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:24.837 07:58:47 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:22:24.837 07:58:47 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:24.837 07:58:47 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:25.095 07:58:47 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=fb2561e6-8466-4e9a-bea8-9e6126916020 00:22:25.096 07:58:47 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:22:25.096 07:58:47 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u fb2561e6-8466-4e9a-bea8-9e6126916020 00:22:25.354 07:58:47 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:25.612 07:58:48 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=ebbe1287-1ea8-499a-81dd-b031ec62a928 00:22:25.612 07:58:48 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ebbe1287-1ea8-499a-81dd-b031ec62a928 00:22:25.871 07:58:48 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=708ee8d1-aa32-4a28-aeae-419cc844ea84 00:22:25.872 07:58:48 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 708ee8d1-aa32-4a28-aeae-419cc844ea84 00:22:25.872 07:58:48 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:22:25.872 07:58:48 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:25.872 07:58:48 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=708ee8d1-aa32-4a28-aeae-419cc844ea84 00:22:25.872 07:58:48 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:22:25.872 07:58:48 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 708ee8d1-aa32-4a28-aeae-419cc844ea84 00:22:25.872 07:58:48 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=708ee8d1-aa32-4a28-aeae-419cc844ea84 00:22:25.872 07:58:48 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:25.872 07:58:48 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:22:25.872 07:58:48 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:22:25.872 07:58:48 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 708ee8d1-aa32-4a28-aeae-419cc844ea84 00:22:26.439 07:58:48 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:26.439 { 00:22:26.439 "name": "708ee8d1-aa32-4a28-aeae-419cc844ea84", 00:22:26.439 "aliases": [ 00:22:26.439 "lvs/nvme0n1p0" 00:22:26.439 ], 00:22:26.439 "product_name": "Logical Volume", 00:22:26.439 "block_size": 4096, 00:22:26.439 "num_blocks": 26476544, 00:22:26.439 "uuid": "708ee8d1-aa32-4a28-aeae-419cc844ea84", 00:22:26.439 "assigned_rate_limits": { 00:22:26.439 "rw_ios_per_sec": 0, 00:22:26.439 "rw_mbytes_per_sec": 0, 00:22:26.439 "r_mbytes_per_sec": 0, 00:22:26.439 "w_mbytes_per_sec": 0 00:22:26.439 }, 00:22:26.439 "claimed": false, 00:22:26.439 "zoned": false, 00:22:26.439 "supported_io_types": { 00:22:26.439 "read": true, 00:22:26.439 "write": true, 00:22:26.439 "unmap": true, 00:22:26.439 "flush": false, 00:22:26.439 "reset": true, 00:22:26.439 "nvme_admin": false, 00:22:26.439 "nvme_io": false, 00:22:26.439 "nvme_io_md": false, 00:22:26.439 "write_zeroes": true, 00:22:26.439 "zcopy": false, 00:22:26.439 "get_zone_info": false, 00:22:26.439 "zone_management": false, 00:22:26.439 "zone_append": false, 00:22:26.439 "compare": false, 00:22:26.439 "compare_and_write": false, 00:22:26.439 "abort": false, 00:22:26.439 "seek_hole": true, 00:22:26.439 "seek_data": true, 00:22:26.439 "copy": false, 00:22:26.439 "nvme_iov_md": false 00:22:26.439 }, 00:22:26.439 "driver_specific": { 00:22:26.439 "lvol": { 00:22:26.439 "lvol_store_uuid": "ebbe1287-1ea8-499a-81dd-b031ec62a928", 00:22:26.439 "base_bdev": "nvme0n1", 00:22:26.439 "thin_provision": true, 00:22:26.439 "num_allocated_clusters": 0, 00:22:26.439 "snapshot": false, 00:22:26.439 "clone": false, 00:22:26.439 "esnap_clone": false 00:22:26.439 } 00:22:26.439 } 00:22:26.439 } 00:22:26.439 ]' 00:22:26.439 07:58:48 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:26.439 07:58:48 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:22:26.439 07:58:48 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:26.439 07:58:48 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:26.439 07:58:48 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:26.439 07:58:48 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:22:26.439 07:58:48 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:22:26.439 07:58:48 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:22:26.439 07:58:48 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:26.697 07:58:49 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:26.697 07:58:49 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:26.697 07:58:49 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 708ee8d1-aa32-4a28-aeae-419cc844ea84 00:22:26.698 07:58:49 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=708ee8d1-aa32-4a28-aeae-419cc844ea84 00:22:26.698 07:58:49 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:26.698 07:58:49 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:22:26.698 07:58:49 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:22:26.698 07:58:49 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 708ee8d1-aa32-4a28-aeae-419cc844ea84 00:22:26.956 07:58:49 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:26.956 { 00:22:26.956 "name": "708ee8d1-aa32-4a28-aeae-419cc844ea84", 00:22:26.956 "aliases": [ 00:22:26.956 "lvs/nvme0n1p0" 00:22:26.956 ], 00:22:26.956 "product_name": "Logical Volume", 00:22:26.956 "block_size": 4096, 00:22:26.956 "num_blocks": 26476544, 00:22:26.956 "uuid": "708ee8d1-aa32-4a28-aeae-419cc844ea84", 00:22:26.956 "assigned_rate_limits": { 00:22:26.956 "rw_ios_per_sec": 0, 00:22:26.956 "rw_mbytes_per_sec": 0, 00:22:26.956 "r_mbytes_per_sec": 0, 00:22:26.956 "w_mbytes_per_sec": 0 00:22:26.956 }, 00:22:26.956 "claimed": false, 00:22:26.956 "zoned": false, 00:22:26.956 "supported_io_types": { 00:22:26.956 "read": true, 00:22:26.956 "write": true, 00:22:26.956 "unmap": true, 00:22:26.956 "flush": false, 00:22:26.956 "reset": true, 00:22:26.956 "nvme_admin": false, 00:22:26.956 "nvme_io": false, 00:22:26.956 "nvme_io_md": false, 00:22:26.956 "write_zeroes": true, 00:22:26.956 "zcopy": false, 00:22:26.956 "get_zone_info": false, 00:22:26.956 "zone_management": false, 00:22:26.956 "zone_append": false, 00:22:26.956 "compare": false, 00:22:26.956 "compare_and_write": false, 00:22:26.956 "abort": false, 00:22:26.956 "seek_hole": true, 00:22:26.956 "seek_data": true, 00:22:26.956 "copy": false, 00:22:26.956 "nvme_iov_md": false 00:22:26.956 }, 00:22:26.956 "driver_specific": { 00:22:26.956 "lvol": { 00:22:26.956 "lvol_store_uuid": "ebbe1287-1ea8-499a-81dd-b031ec62a928", 00:22:26.956 "base_bdev": "nvme0n1", 00:22:26.956 "thin_provision": true, 00:22:26.956 "num_allocated_clusters": 0, 00:22:26.956 "snapshot": false, 00:22:26.956 "clone": false, 00:22:26.956 "esnap_clone": false 00:22:26.956 } 00:22:26.956 } 00:22:26.956 } 00:22:26.956 ]' 00:22:26.956 07:58:49 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:26.956 07:58:49 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # bs=4096 00:22:26.956 07:58:49 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:26.956 07:58:49 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:26.956 07:58:49 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:26.956 07:58:49 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:22:26.956 07:58:49 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:22:26.956 07:58:49 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:27.214 07:58:49 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:22:27.214 07:58:49 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:22:27.214 07:58:49 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 708ee8d1-aa32-4a28-aeae-419cc844ea84 00:22:27.214 07:58:49 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=708ee8d1-aa32-4a28-aeae-419cc844ea84 00:22:27.214 07:58:49 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:27.214 07:58:49 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:22:27.214 07:58:49 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:22:27.214 07:58:49 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 708ee8d1-aa32-4a28-aeae-419cc844ea84 00:22:27.801 07:58:50 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:27.801 { 00:22:27.801 "name": "708ee8d1-aa32-4a28-aeae-419cc844ea84", 00:22:27.801 "aliases": [ 00:22:27.801 "lvs/nvme0n1p0" 00:22:27.801 ], 00:22:27.801 "product_name": "Logical Volume", 00:22:27.801 "block_size": 4096, 00:22:27.801 "num_blocks": 26476544, 00:22:27.801 "uuid": "708ee8d1-aa32-4a28-aeae-419cc844ea84", 00:22:27.801 "assigned_rate_limits": { 00:22:27.801 "rw_ios_per_sec": 0, 00:22:27.801 "rw_mbytes_per_sec": 0, 00:22:27.801 "r_mbytes_per_sec": 0, 00:22:27.801 "w_mbytes_per_sec": 0 00:22:27.801 }, 00:22:27.801 "claimed": false, 00:22:27.801 "zoned": false, 00:22:27.801 "supported_io_types": { 00:22:27.801 "read": true, 00:22:27.801 "write": true, 00:22:27.801 "unmap": true, 00:22:27.801 "flush": false, 00:22:27.801 "reset": true, 00:22:27.801 "nvme_admin": false, 00:22:27.801 "nvme_io": false, 00:22:27.801 "nvme_io_md": false, 00:22:27.801 "write_zeroes": true, 00:22:27.801 "zcopy": false, 00:22:27.801 "get_zone_info": false, 00:22:27.801 "zone_management": false, 00:22:27.801 "zone_append": false, 00:22:27.801 "compare": false, 00:22:27.801 "compare_and_write": false, 00:22:27.801 "abort": false, 00:22:27.801 "seek_hole": true, 00:22:27.801 "seek_data": true, 00:22:27.801 "copy": false, 00:22:27.801 "nvme_iov_md": false 00:22:27.801 }, 00:22:27.801 "driver_specific": { 00:22:27.801 "lvol": { 00:22:27.801 "lvol_store_uuid": "ebbe1287-1ea8-499a-81dd-b031ec62a928", 00:22:27.801 "base_bdev": "nvme0n1", 00:22:27.801 "thin_provision": true, 00:22:27.801 "num_allocated_clusters": 0, 00:22:27.801 "snapshot": false, 00:22:27.801 "clone": false, 00:22:27.801 "esnap_clone": false 00:22:27.801 } 00:22:27.801 } 00:22:27.801 } 00:22:27.801 ]' 00:22:27.801 07:58:50 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:27.801 07:58:50 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:22:27.801 07:58:50 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:27.801 07:58:50 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # 
nb=26476544 00:22:27.801 07:58:50 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:27.801 07:58:50 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:22:27.801 07:58:50 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:22:27.801 07:58:50 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 708ee8d1-aa32-4a28-aeae-419cc844ea84 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:22:28.060 [2024-11-06 07:58:50.517909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.060 [2024-11-06 07:58:50.517980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:28.060 [2024-11-06 07:58:50.518008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:28.060 [2024-11-06 07:58:50.518022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.060 [2024-11-06 07:58:50.521784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.060 [2024-11-06 07:58:50.521846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:28.060 [2024-11-06 07:58:50.521868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.722 ms 00:22:28.060 [2024-11-06 07:58:50.521882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.060 [2024-11-06 07:58:50.522067] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:28.060 [2024-11-06 07:58:50.523101] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:28.060 [2024-11-06 07:58:50.523151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.061 [2024-11-06 07:58:50.523167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:28.061 [2024-11-06 07:58:50.523183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.104 ms 00:22:28.061 [2024-11-06 07:58:50.523196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.061 [2024-11-06 07:58:50.523750] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a89fd864-aa96-428e-abd4-47c1715fad37 00:22:28.061 [2024-11-06 07:58:50.525788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.061 [2024-11-06 07:58:50.525837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:28.061 [2024-11-06 07:58:50.525856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:22:28.061 [2024-11-06 07:58:50.525872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.061 [2024-11-06 07:58:50.535795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.061 [2024-11-06 07:58:50.535878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:28.061 [2024-11-06 07:58:50.535898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.812 ms 00:22:28.061 [2024-11-06 07:58:50.535918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.061 [2024-11-06 07:58:50.536176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.061 [2024-11-06 07:58:50.536205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:28.061 [2024-11-06 07:58:50.536220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.113 ms 00:22:28.061 [2024-11-06 07:58:50.536241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.061 [2024-11-06 07:58:50.536328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.061 [2024-11-06 07:58:50.536350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:28.061 [2024-11-06 07:58:50.536364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:28.061 [2024-11-06 07:58:50.536379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.061 [2024-11-06 07:58:50.536430] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:28.061 [2024-11-06 07:58:50.541796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.061 [2024-11-06 07:58:50.541850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:28.061 [2024-11-06 07:58:50.541871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.373 ms 00:22:28.061 [2024-11-06 07:58:50.541890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.061 [2024-11-06 07:58:50.542008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.061 [2024-11-06 07:58:50.542028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:28.061 [2024-11-06 07:58:50.542044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:22:28.061 [2024-11-06 07:58:50.542083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.061 [2024-11-06 07:58:50.542133] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:28.061 [2024-11-06 07:58:50.542322] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:28.061 [2024-11-06 07:58:50.542367] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:28.061 [2024-11-06 07:58:50.542386] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:28.061 [2024-11-06 07:58:50.542404] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:28.061 [2024-11-06 07:58:50.542419] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:28.061 [2024-11-06 07:58:50.542436] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:28.061 [2024-11-06 07:58:50.542448] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:28.061 [2024-11-06 07:58:50.542462] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:28.061 [2024-11-06 07:58:50.542474] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:28.061 [2024-11-06 07:58:50.542491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.061 [2024-11-06 07:58:50.542506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:28.061 [2024-11-06 07:58:50.542522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.364 ms 00:22:28.061 [2024-11-06 07:58:50.542533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.061 [2024-11-06 07:58:50.542647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.061 
[2024-11-06 07:58:50.542664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:28.061 [2024-11-06 07:58:50.542680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:22:28.061 [2024-11-06 07:58:50.542692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.061 [2024-11-06 07:58:50.542832] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:28.061 [2024-11-06 07:58:50.542849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:28.061 [2024-11-06 07:58:50.542868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:28.061 [2024-11-06 07:58:50.542881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:28.061 [2024-11-06 07:58:50.542896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:28.061 [2024-11-06 07:58:50.542907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:28.061 [2024-11-06 07:58:50.542921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:28.061 [2024-11-06 07:58:50.542933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:28.061 [2024-11-06 07:58:50.542946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:28.061 [2024-11-06 07:58:50.542957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:28.061 [2024-11-06 07:58:50.542970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:28.061 [2024-11-06 07:58:50.542981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:28.061 [2024-11-06 07:58:50.542995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:28.061 [2024-11-06 07:58:50.543007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:28.061 [2024-11-06 07:58:50.543021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:28.061 [2024-11-06 07:58:50.543032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:28.061 [2024-11-06 07:58:50.543047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:28.061 [2024-11-06 07:58:50.543060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:28.061 [2024-11-06 07:58:50.543074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:28.061 [2024-11-06 07:58:50.543087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:28.061 [2024-11-06 07:58:50.543102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:28.061 [2024-11-06 07:58:50.543113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:28.061 [2024-11-06 07:58:50.543127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:28.061 [2024-11-06 07:58:50.543139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:28.061 [2024-11-06 07:58:50.543152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:28.061 [2024-11-06 07:58:50.543163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:28.061 [2024-11-06 07:58:50.543176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:28.061 [2024-11-06 07:58:50.543186] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:28.061 [2024-11-06 07:58:50.543199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:22:28.061 [2024-11-06 07:58:50.543211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:28.061 [2024-11-06 07:58:50.543223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:28.061 [2024-11-06 07:58:50.543234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:28.061 [2024-11-06 07:58:50.543262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:28.061 [2024-11-06 07:58:50.543276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:28.061 [2024-11-06 07:58:50.543290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:28.061 [2024-11-06 07:58:50.543301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:28.061 [2024-11-06 07:58:50.543314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:28.061 [2024-11-06 07:58:50.543325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:28.061 [2024-11-06 07:58:50.543339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:28.061 [2024-11-06 07:58:50.543350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:28.061 [2024-11-06 07:58:50.543363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:28.061 [2024-11-06 07:58:50.543374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:28.061 [2024-11-06 07:58:50.543388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:28.061 [2024-11-06 07:58:50.543399] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:28.061 [2024-11-06 07:58:50.543414] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:28.061 [2024-11-06 07:58:50.543425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:28.061 [2024-11-06 07:58:50.543439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:28.061 [2024-11-06 07:58:50.543452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:28.061 [2024-11-06 07:58:50.543471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:28.061 [2024-11-06 07:58:50.543482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:28.061 [2024-11-06 07:58:50.543496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:28.061 [2024-11-06 07:58:50.543508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:28.061 [2024-11-06 07:58:50.543522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:28.061 [2024-11-06 07:58:50.543539] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:28.061 [2024-11-06 07:58:50.543567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:28.061 [2024-11-06 07:58:50.543581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:28.061 [2024-11-06 07:58:50.543595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:28.062 [2024-11-06 07:58:50.543607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:22:28.062 [2024-11-06 07:58:50.543621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:28.062 [2024-11-06 07:58:50.543634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:28.062 [2024-11-06 07:58:50.543648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:28.062 [2024-11-06 07:58:50.543660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:28.062 [2024-11-06 07:58:50.543674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:28.062 [2024-11-06 07:58:50.543686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:28.062 [2024-11-06 07:58:50.543703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:28.062 [2024-11-06 07:58:50.543714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:28.062 [2024-11-06 07:58:50.543728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:28.062 [2024-11-06 07:58:50.543740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:28.062 [2024-11-06 07:58:50.543755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:28.062 [2024-11-06 07:58:50.543767] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:28.062 [2024-11-06 07:58:50.543783] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:28.062 [2024-11-06 07:58:50.543797] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:28.062 [2024-11-06 07:58:50.543813] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:28.062 [2024-11-06 07:58:50.543825] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:28.062 [2024-11-06 07:58:50.543840] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:28.062 [2024-11-06 07:58:50.543854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.062 [2024-11-06 07:58:50.543876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:28.062 [2024-11-06 07:58:50.543888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.095 ms 00:22:28.062 [2024-11-06 07:58:50.543902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.062 [2024-11-06 07:58:50.544009] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:22:28.062 [2024-11-06 07:58:50.544032] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:31.345 [2024-11-06 07:58:53.537387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.345 [2024-11-06 07:58:53.537486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:31.345 [2024-11-06 07:58:53.537514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2993.385 ms 00:22:31.345 [2024-11-06 07:58:53.537530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.345 [2024-11-06 07:58:53.578193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.345 [2024-11-06 07:58:53.578295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:31.345 [2024-11-06 07:58:53.578326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.253 ms 00:22:31.345 [2024-11-06 07:58:53.578343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.345 [2024-11-06 07:58:53.578569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.345 [2024-11-06 07:58:53.578596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:31.345 [2024-11-06 07:58:53.578611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:22:31.345 [2024-11-06 07:58:53.578630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.345 [2024-11-06 07:58:53.633269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.345 [2024-11-06 07:58:53.633629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:31.345 [2024-11-06 07:58:53.633669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.549 ms 00:22:31.345 [2024-11-06 07:58:53.633689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.345 [2024-11-06 07:58:53.633891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.345 [2024-11-06 07:58:53.633917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:31.345 [2024-11-06 07:58:53.633932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:31.345 [2024-11-06 07:58:53.633947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.345 [2024-11-06 07:58:53.634576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.345 [2024-11-06 07:58:53.634602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:31.345 [2024-11-06 07:58:53.634617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.588 ms 00:22:31.345 [2024-11-06 07:58:53.634637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.345 [2024-11-06 07:58:53.634812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.345 [2024-11-06 07:58:53.634832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:31.345 [2024-11-06 07:58:53.634845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:22:31.345 [2024-11-06 07:58:53.634863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.345 [2024-11-06 07:58:53.656867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.345 [2024-11-06 07:58:53.657290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:22:31.345 [2024-11-06 07:58:53.657327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.935 ms 00:22:31.345 [2024-11-06 07:58:53.657345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.345 [2024-11-06 07:58:53.675126] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:31.345 [2024-11-06 07:58:53.697858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.345 [2024-11-06 07:58:53.697947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:31.345 [2024-11-06 07:58:53.697973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.295 ms 00:22:31.345 [2024-11-06 07:58:53.697991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.345 [2024-11-06 07:58:53.789115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.345 [2024-11-06 07:58:53.789210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:31.345 [2024-11-06 07:58:53.789236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 90.945 ms 00:22:31.345 [2024-11-06 07:58:53.789278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.345 [2024-11-06 07:58:53.789661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.345 [2024-11-06 07:58:53.789685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:31.345 [2024-11-06 07:58:53.789707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.183 ms 00:22:31.345 [2024-11-06 07:58:53.789719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.345 [2024-11-06 07:58:53.824850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.345 [2024-11-06 07:58:53.824963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:31.345 [2024-11-06 07:58:53.824997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.062 ms 00:22:31.345 [2024-11-06 07:58:53.825020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.345 [2024-11-06 07:58:53.859920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.346 [2024-11-06 07:58:53.860007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:31.346 [2024-11-06 07:58:53.860034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.693 ms 00:22:31.346 [2024-11-06 07:58:53.860048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.346 [2024-11-06 07:58:53.861101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.346 [2024-11-06 07:58:53.861132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:31.346 [2024-11-06 07:58:53.861151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.859 ms 00:22:31.346 [2024-11-06 07:58:53.861164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.346 [2024-11-06 07:58:53.959244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.346 [2024-11-06 07:58:53.959342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:31.346 [2024-11-06 07:58:53.959372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.024 ms 00:22:31.346 [2024-11-06 07:58:53.959391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
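The startup trace above pins down every size that the layout dump reports, so the figures can be cross-checked by hand. A minimal sketch of that arithmetic in shell, assuming nothing beyond the block_size, num_blocks, L2P entry count, and L2P address size printed above (the variable names here are illustrative, not taken from the test scripts):

    # get_bdev_size above reports bs=4096 and nb=26476544 for the thin lvol,
    # then echoes the size in MiB: 4096 B/block * 26476544 blocks / 2^20.
    bs=4096
    nb=26476544
    echo $(( bs * nb / 1024 / 1024 ))        # 103424 MiB, matching bdev_size above

    # The l2p region in the layout dump: one 4-byte address per L2P entry.
    echo $(( 23592960 * 4 / 1024 / 1024 ))   # 90 MiB -> "Region l2p ... 90.00 MiB"

The same check works for the QEMU namespace nvme0n1 earlier in the trace: 1310720 blocks at 4096 bytes each gives the 5120 MiB that get_bdev_size echoed there.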
00:22:31.605 [2024-11-06 07:58:53.995404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.605 [2024-11-06 07:58:53.995495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:31.605 [2024-11-06 07:58:53.995529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.751 ms 00:22:31.605 [2024-11-06 07:58:53.995543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.605 [2024-11-06 07:58:54.030924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.605 [2024-11-06 07:58:54.031009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:31.605 [2024-11-06 07:58:54.031035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.174 ms 00:22:31.605 [2024-11-06 07:58:54.031048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.605 [2024-11-06 07:58:54.066309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.605 [2024-11-06 07:58:54.066383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:31.605 [2024-11-06 07:58:54.066410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.066 ms 00:22:31.605 [2024-11-06 07:58:54.066443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.605 [2024-11-06 07:58:54.066641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.605 [2024-11-06 07:58:54.066664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:31.605 [2024-11-06 07:58:54.066687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:31.605 [2024-11-06 07:58:54.066704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.605 [2024-11-06 07:58:54.066822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.605 [2024-11-06 07:58:54.066846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:31.605 [2024-11-06 07:58:54.066863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:22:31.605 [2024-11-06 07:58:54.066880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.605 [2024-11-06 07:58:54.068172] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:31.605 [2024-11-06 07:58:54.073952] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3549.914 ms, result 0 00:22:31.605 [2024-11-06 07:58:54.075083] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:31.605 { 00:22:31.605 "name": "ftl0", 00:22:31.605 "uuid": "a89fd864-aa96-428e-abd4-47c1715fad37" 00:22:31.605 } 00:22:31.605 07:58:54 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:22:31.605 07:58:54 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:22:31.605 07:58:54 ftl.ftl_trim -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:31.605 07:58:54 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local i 00:22:31.605 07:58:54 ftl.ftl_trim -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:31.605 07:58:54 ftl.ftl_trim -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:31.605 07:58:54 ftl.ftl_trim -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:22:31.864 07:58:54 ftl.ftl_trim -- 
common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:22:32.123 [ 00:22:32.123 { 00:22:32.123 "name": "ftl0", 00:22:32.123 "aliases": [ 00:22:32.123 "a89fd864-aa96-428e-abd4-47c1715fad37" 00:22:32.123 ], 00:22:32.123 "product_name": "FTL disk", 00:22:32.123 "block_size": 4096, 00:22:32.123 "num_blocks": 23592960, 00:22:32.123 "uuid": "a89fd864-aa96-428e-abd4-47c1715fad37", 00:22:32.123 "assigned_rate_limits": { 00:22:32.123 "rw_ios_per_sec": 0, 00:22:32.123 "rw_mbytes_per_sec": 0, 00:22:32.123 "r_mbytes_per_sec": 0, 00:22:32.123 "w_mbytes_per_sec": 0 00:22:32.123 }, 00:22:32.123 "claimed": false, 00:22:32.123 "zoned": false, 00:22:32.123 "supported_io_types": { 00:22:32.123 "read": true, 00:22:32.123 "write": true, 00:22:32.123 "unmap": true, 00:22:32.123 "flush": true, 00:22:32.123 "reset": false, 00:22:32.123 "nvme_admin": false, 00:22:32.123 "nvme_io": false, 00:22:32.123 "nvme_io_md": false, 00:22:32.123 "write_zeroes": true, 00:22:32.123 "zcopy": false, 00:22:32.123 "get_zone_info": false, 00:22:32.123 "zone_management": false, 00:22:32.123 "zone_append": false, 00:22:32.123 "compare": false, 00:22:32.123 "compare_and_write": false, 00:22:32.123 "abort": false, 00:22:32.123 "seek_hole": false, 00:22:32.123 "seek_data": false, 00:22:32.123 "copy": false, 00:22:32.123 "nvme_iov_md": false 00:22:32.123 }, 00:22:32.123 "driver_specific": { 00:22:32.123 "ftl": { 00:22:32.123 "base_bdev": "708ee8d1-aa32-4a28-aeae-419cc844ea84", 00:22:32.123 "cache": "nvc0n1p0" 00:22:32.123 } 00:22:32.123 } 00:22:32.123 } 00:22:32.123 ] 00:22:32.381 07:58:54 ftl.ftl_trim -- common/autotest_common.sh@907 -- # return 0 00:22:32.381 07:58:54 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:22:32.381 07:58:54 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:32.646 07:58:55 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:22:32.646 07:58:55 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:22:32.904 07:58:55 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:22:32.904 { 00:22:32.904 "name": "ftl0", 00:22:32.904 "aliases": [ 00:22:32.904 "a89fd864-aa96-428e-abd4-47c1715fad37" 00:22:32.904 ], 00:22:32.904 "product_name": "FTL disk", 00:22:32.904 "block_size": 4096, 00:22:32.904 "num_blocks": 23592960, 00:22:32.904 "uuid": "a89fd864-aa96-428e-abd4-47c1715fad37", 00:22:32.904 "assigned_rate_limits": { 00:22:32.904 "rw_ios_per_sec": 0, 00:22:32.904 "rw_mbytes_per_sec": 0, 00:22:32.904 "r_mbytes_per_sec": 0, 00:22:32.904 "w_mbytes_per_sec": 0 00:22:32.904 }, 00:22:32.904 "claimed": false, 00:22:32.904 "zoned": false, 00:22:32.904 "supported_io_types": { 00:22:32.904 "read": true, 00:22:32.904 "write": true, 00:22:32.904 "unmap": true, 00:22:32.904 "flush": true, 00:22:32.904 "reset": false, 00:22:32.904 "nvme_admin": false, 00:22:32.904 "nvme_io": false, 00:22:32.904 "nvme_io_md": false, 00:22:32.904 "write_zeroes": true, 00:22:32.904 "zcopy": false, 00:22:32.904 "get_zone_info": false, 00:22:32.904 "zone_management": false, 00:22:32.904 "zone_append": false, 00:22:32.904 "compare": false, 00:22:32.904 "compare_and_write": false, 00:22:32.904 "abort": false, 00:22:32.904 "seek_hole": false, 00:22:32.904 "seek_data": false, 00:22:32.904 "copy": false, 00:22:32.904 "nvme_iov_md": false 00:22:32.904 }, 00:22:32.904 "driver_specific": { 00:22:32.904 "ftl": { 00:22:32.904 "base_bdev": "708ee8d1-aa32-4a28-aeae-419cc844ea84", 
00:22:32.904 "cache": "nvc0n1p0" 00:22:32.904 } 00:22:32.904 } 00:22:32.904 } 00:22:32.904 ]' 00:22:32.904 07:58:55 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:22:32.904 07:58:55 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:22:32.904 07:58:55 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:22:33.163 [2024-11-06 07:58:55.664822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.163 [2024-11-06 07:58:55.665190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:33.163 [2024-11-06 07:58:55.665224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:33.163 [2024-11-06 07:58:55.665242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.163 [2024-11-06 07:58:55.665341] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:33.163 [2024-11-06 07:58:55.669082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.163 [2024-11-06 07:58:55.669129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:33.163 [2024-11-06 07:58:55.669156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.705 ms 00:22:33.163 [2024-11-06 07:58:55.669169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.163 [2024-11-06 07:58:55.669867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.163 [2024-11-06 07:58:55.669905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:33.163 [2024-11-06 07:58:55.669925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.566 ms 00:22:33.163 [2024-11-06 07:58:55.669941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.163 [2024-11-06 07:58:55.673571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.163 [2024-11-06 07:58:55.673612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:33.163 [2024-11-06 07:58:55.673630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.584 ms 00:22:33.163 [2024-11-06 07:58:55.673646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.163 [2024-11-06 07:58:55.681282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.163 [2024-11-06 07:58:55.681355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:33.163 [2024-11-06 07:58:55.681377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.515 ms 00:22:33.163 [2024-11-06 07:58:55.681390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.163 [2024-11-06 07:58:55.716586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.163 [2024-11-06 07:58:55.716672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:33.163 [2024-11-06 07:58:55.716702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.048 ms 00:22:33.163 [2024-11-06 07:58:55.716715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.163 [2024-11-06 07:58:55.738866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.163 [2024-11-06 07:58:55.738962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:33.163 [2024-11-06 07:58:55.738990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 21.939 ms 00:22:33.163 [2024-11-06 07:58:55.739004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.163 [2024-11-06 07:58:55.739421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.163 [2024-11-06 07:58:55.739452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:33.163 [2024-11-06 07:58:55.739471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:22:33.163 [2024-11-06 07:58:55.739484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.163 [2024-11-06 07:58:55.774960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.163 [2024-11-06 07:58:55.775374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:33.163 [2024-11-06 07:58:55.775415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.414 ms 00:22:33.163 [2024-11-06 07:58:55.775430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.423 [2024-11-06 07:58:55.810779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.423 [2024-11-06 07:58:55.811167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:33.423 [2024-11-06 07:58:55.811212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.096 ms 00:22:33.423 [2024-11-06 07:58:55.811226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.423 [2024-11-06 07:58:55.846002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.423 [2024-11-06 07:58:55.846429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:33.423 [2024-11-06 07:58:55.846470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.552 ms 00:22:33.423 [2024-11-06 07:58:55.846494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.423 [2024-11-06 07:58:55.882154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.423 [2024-11-06 07:58:55.882535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:33.423 [2024-11-06 07:58:55.882576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.380 ms 00:22:33.423 [2024-11-06 07:58:55.882591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.423 [2024-11-06 07:58:55.882822] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:33.423 [2024-11-06 07:58:55.882854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:33.423 [2024-11-06 07:58:55.882874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:33.423 [2024-11-06 07:58:55.882887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:33.423 [2024-11-06 07:58:55.882903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:33.423 [2024-11-06 07:58:55.882916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:33.423 [2024-11-06 07:58:55.882936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:33.423 [2024-11-06 07:58:55.882949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:33.423 [2024-11-06 07:58:55.882964] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:33.423 [2024-11-06 07:58:55.882976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:33.423 [2024-11-06 07:58:55.882992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:33.423 [2024-11-06 07:58:55.883005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:33.423 [2024-11-06 07:58:55.883020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:33.423 [2024-11-06 07:58:55.883033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:33.423 [2024-11-06 07:58:55.883048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:33.423 [2024-11-06 07:58:55.883060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:33.423 [2024-11-06 07:58:55.883075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:33.423 [2024-11-06 07:58:55.883088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:33.423 [2024-11-06 07:58:55.883104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:33.423 [2024-11-06 07:58:55.883117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:33.423 [2024-11-06 07:58:55.883135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:33.423 [2024-11-06 07:58:55.883148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:33.423 [2024-11-06 07:58:55.883189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:33.423 [2024-11-06 07:58:55.883203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:33.423 [2024-11-06 07:58:55.883218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:33.423 [2024-11-06 07:58:55.883231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:33.423 [2024-11-06 07:58:55.883269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:33.423 [2024-11-06 07:58:55.883286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:33.423 [2024-11-06 07:58:55.883304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:33.423 [2024-11-06 07:58:55.883317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:33.423 [2024-11-06 07:58:55.883332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:33.423 [2024-11-06 07:58:55.883345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:33.423 [2024-11-06 07:58:55.883361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:33.423 
[2024-11-06 07:58:55.883374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:33.423 [2024-11-06 07:58:55.883390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:33.423 [2024-11-06 07:58:55.883404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:33.423 [2024-11-06 07:58:55.883419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:33.423 [2024-11-06 07:58:55.883431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:33.423 [2024-11-06 07:58:55.883448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:33.423 [2024-11-06 07:58:55.883462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:33.423 [2024-11-06 07:58:55.883477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:33.423 [2024-11-06 07:58:55.883489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:33.423 [2024-11-06 07:58:55.883504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:33.423 [2024-11-06 07:58:55.883517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:33.423 [2024-11-06 07:58:55.883534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:33.423 [2024-11-06 07:58:55.883546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:33.423 [2024-11-06 07:58:55.883561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:33.423 [2024-11-06 07:58:55.883573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:33.423 [2024-11-06 07:58:55.883588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:33.423 [2024-11-06 07:58:55.883601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:33.424 [2024-11-06 07:58:55.883615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:33.424 [2024-11-06 07:58:55.883628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:33.424 [2024-11-06 07:58:55.883644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:33.424 [2024-11-06 07:58:55.883657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:33.424 [2024-11-06 07:58:55.883674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:33.424 [2024-11-06 07:58:55.883687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:33.424 [2024-11-06 07:58:55.883701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:33.424 [2024-11-06 07:58:55.883714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:22:33.424 [2024-11-06 07:58:55.883728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free [... Bands 59-100 elided: each reports identically, 0 / 261120 wr_cnt: 0 state: free ...] 00:22:33.424 [2024-11-06 07:58:55.884359] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:33.424 [2024-11-06 07:58:55.884377] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a89fd864-aa96-428e-abd4-47c1715fad37 00:22:33.424 [2024-11-06 07:58:55.884390] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:33.424 [2024-11-06 07:58:55.884405] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:33.424 [2024-11-06 07:58:55.884433] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:33.424 [2024-11-06 07:58:55.884449] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:33.424 [2024-11-06 07:58:55.884461] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:33.424 [2024-11-06 07:58:55.884480] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:22:33.424 [2024-11-06 07:58:55.884492] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:33.424 [2024-11-06 07:58:55.884505] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:33.424 [2024-11-06 07:58:55.884516] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:33.424 [2024-11-06 07:58:55.884531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.424 [2024-11-06 07:58:55.884543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:33.424 [2024-11-06 07:58:55.884559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.713 ms 00:22:33.424 [2024-11-06 07:58:55.884571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.424 [2024-11-06 07:58:55.903007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.424 [2024-11-06 07:58:55.903088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:33.424 [2024-11-06 07:58:55.903117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.373 ms 00:22:33.424 [2024-11-06 07:58:55.903134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.424 [2024-11-06 07:58:55.903757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:33.424 [2024-11-06 07:58:55.903793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:33.424 [2024-11-06 07:58:55.903813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.448 ms 00:22:33.424 [2024-11-06 07:58:55.903826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.424 [2024-11-06 07:58:55.964254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:33.424 [2024-11-06 07:58:55.964338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:33.424 [2024-11-06 07:58:55.964362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:33.424 [2024-11-06 07:58:55.964380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.424 [2024-11-06 07:58:55.964567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:33.424 [2024-11-06 07:58:55.964588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:33.424 [2024-11-06 07:58:55.964604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:33.424 [2024-11-06 07:58:55.964617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.424 [2024-11-06 07:58:55.964728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:33.424 [2024-11-06 07:58:55.964749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:33.424 [2024-11-06 07:58:55.964769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:33.424 [2024-11-06 07:58:55.964781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.424 [2024-11-06 07:58:55.964826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:33.424 [2024-11-06 07:58:55.964841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:33.424 [2024-11-06 07:58:55.964856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:33.424 [2024-11-06 07:58:55.964868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.683 [2024-11-06 07:58:56.084192] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:33.683 [2024-11-06 07:58:56.084292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:33.683 [2024-11-06 07:58:56.084328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:33.683 [2024-11-06 07:58:56.084346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.683 [2024-11-06 07:58:56.178674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:33.683 [2024-11-06 07:58:56.178764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:33.683 [2024-11-06 07:58:56.178792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:33.683 [2024-11-06 07:58:56.178805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.683 [2024-11-06 07:58:56.178956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:33.683 [2024-11-06 07:58:56.178981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:33.683 [2024-11-06 07:58:56.179026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:33.683 [2024-11-06 07:58:56.179039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.683 [2024-11-06 07:58:56.179112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:33.683 [2024-11-06 07:58:56.179131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:33.683 [2024-11-06 07:58:56.179146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:33.683 [2024-11-06 07:58:56.179158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.683 [2024-11-06 07:58:56.179361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:33.684 [2024-11-06 07:58:56.179384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:33.684 [2024-11-06 07:58:56.179402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:33.684 [2024-11-06 07:58:56.179414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.684 [2024-11-06 07:58:56.179504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:33.684 [2024-11-06 07:58:56.179528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:33.684 [2024-11-06 07:58:56.179544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:33.684 [2024-11-06 07:58:56.179556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.684 [2024-11-06 07:58:56.179625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:33.684 [2024-11-06 07:58:56.179641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:33.684 [2024-11-06 07:58:56.179667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:33.684 [2024-11-06 07:58:56.179678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:33.684 [2024-11-06 07:58:56.179757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:33.684 [2024-11-06 07:58:56.179779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:33.684 [2024-11-06 07:58:56.179794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:33.684 [2024-11-06 07:58:56.179807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:22:33.684 [2024-11-06 07:58:56.180046] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 515.208 ms, result 0 00:22:33.684 true 00:22:33.684 07:58:56 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 75856 00:22:33.684 07:58:56 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 75856 ']' 00:22:33.684 07:58:56 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 75856 00:22:33.684 07:58:56 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:22:33.684 07:58:56 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:33.684 07:58:56 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75856 00:22:33.684 killing process with pid 75856 00:22:33.684 07:58:56 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:33.684 07:58:56 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:33.684 07:58:56 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75856' 00:22:33.684 07:58:56 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 75856 00:22:33.684 07:58:56 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 75856 00:22:38.960 07:59:01 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:22:39.893 65536+0 records in 00:22:39.893 65536+0 records out 00:22:39.893 268435456 bytes (268 MB, 256 MiB) copied, 1.28231 s, 209 MB/s 00:22:39.893 07:59:02 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:39.893 [2024-11-06 07:59:02.519494] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
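A quick cross-check on the dd numbers just above: 65536 blocks of 4 KiB are 268435456 bytes, and 268435456 B / 1.28231 s is about 209 MB/s in dd's decimal-megabyte convention, so the reported rate is self-consistent. A minimal standalone C sketch of the same arithmetic (illustrative only, not part of the SPDK test suite):

/* Sanity check of the dd transfer rate logged above. */
#include <stdio.h>

int main(void)
{
    long long bytes = 65536LL * 4096; /* bs=4K count=65536 -> 268435456 bytes */
    double elapsed = 1.28231;         /* seconds, from the dd output */
    /* dd reports decimal MB: 1 MB = 1e6 bytes */
    printf("%.0f MB/s\n", bytes / elapsed / 1e6); /* prints 209 */
    return 0;
}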
00:22:39.893 [2024-11-06 07:59:02.519857] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76078 ] 00:22:40.151 [2024-11-06 07:59:02.702745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.410 [2024-11-06 07:59:02.853504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:40.668 [2024-11-06 07:59:03.227583] bdev.c:8607:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:40.668 [2024-11-06 07:59:03.227672] bdev.c:8607:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:40.928 [2024-11-06 07:59:03.394748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.928 [2024-11-06 07:59:03.395099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:40.928 [2024-11-06 07:59:03.395133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:40.928 [2024-11-06 07:59:03.395149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.928 [2024-11-06 07:59:03.398886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.928 [2024-11-06 07:59:03.398934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:40.928 [2024-11-06 07:59:03.398954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.695 ms 00:22:40.928 [2024-11-06 07:59:03.398967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.928 [2024-11-06 07:59:03.399131] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:40.928 [2024-11-06 07:59:03.400124] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:40.928 [2024-11-06 07:59:03.400168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.928 [2024-11-06 07:59:03.400184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:40.928 [2024-11-06 07:59:03.400198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.050 ms 00:22:40.928 [2024-11-06 07:59:03.400210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.928 [2024-11-06 07:59:03.402289] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:40.928 [2024-11-06 07:59:03.419413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.928 [2024-11-06 07:59:03.419491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:40.928 [2024-11-06 07:59:03.419519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.123 ms 00:22:40.928 [2024-11-06 07:59:03.419532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.928 [2024-11-06 07:59:03.419733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.928 [2024-11-06 07:59:03.419757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:40.928 [2024-11-06 07:59:03.419772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:22:40.928 [2024-11-06 07:59:03.419790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.928 [2024-11-06 07:59:03.429074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:40.928 [2024-11-06 07:59:03.429152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:40.928 [2024-11-06 07:59:03.429171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.191 ms 00:22:40.928 [2024-11-06 07:59:03.429205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.928 [2024-11-06 07:59:03.429433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.928 [2024-11-06 07:59:03.429457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:40.928 [2024-11-06 07:59:03.429473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:22:40.928 [2024-11-06 07:59:03.429485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.928 [2024-11-06 07:59:03.429534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.928 [2024-11-06 07:59:03.429550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:40.928 [2024-11-06 07:59:03.429569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:40.928 [2024-11-06 07:59:03.429581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.928 [2024-11-06 07:59:03.429616] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:40.928 [2024-11-06 07:59:03.435000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.928 [2024-11-06 07:59:03.435199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:40.928 [2024-11-06 07:59:03.435342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.394 ms 00:22:40.928 [2024-11-06 07:59:03.435396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.928 [2024-11-06 07:59:03.435632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.928 [2024-11-06 07:59:03.435693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:40.928 [2024-11-06 07:59:03.435736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:22:40.928 [2024-11-06 07:59:03.435916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.928 [2024-11-06 07:59:03.435998] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:40.928 [2024-11-06 07:59:03.436064] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:40.928 [2024-11-06 07:59:03.436316] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:40.928 [2024-11-06 07:59:03.436479] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:40.928 [2024-11-06 07:59:03.436728] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:40.928 [2024-11-06 07:59:03.436769] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:40.928 [2024-11-06 07:59:03.436787] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:40.928 [2024-11-06 07:59:03.436805] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:40.928 [2024-11-06 07:59:03.436828] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:40.928 [2024-11-06 07:59:03.436848] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:40.928 [2024-11-06 07:59:03.436860] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:40.928 [2024-11-06 07:59:03.436871] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:40.928 [2024-11-06 07:59:03.436883] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:40.928 [2024-11-06 07:59:03.436899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.928 [2024-11-06 07:59:03.436912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:40.928 [2024-11-06 07:59:03.436926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.905 ms 00:22:40.928 [2024-11-06 07:59:03.436937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.928 [2024-11-06 07:59:03.437059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.928 [2024-11-06 07:59:03.437079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:40.928 [2024-11-06 07:59:03.437092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:22:40.928 [2024-11-06 07:59:03.437110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.928 [2024-11-06 07:59:03.437228] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:40.928 [2024-11-06 07:59:03.437273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:40.928 [2024-11-06 07:59:03.437308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:40.928 [2024-11-06 07:59:03.437334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:40.928 [2024-11-06 07:59:03.437357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:40.928 [2024-11-06 07:59:03.437373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:40.928 [2024-11-06 07:59:03.437389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:40.928 [2024-11-06 07:59:03.437405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:40.928 [2024-11-06 07:59:03.437420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:40.928 [2024-11-06 07:59:03.437436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:40.928 [2024-11-06 07:59:03.437451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:40.928 [2024-11-06 07:59:03.437466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:40.928 [2024-11-06 07:59:03.437482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:40.928 [2024-11-06 07:59:03.437517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:40.928 [2024-11-06 07:59:03.437534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:40.928 [2024-11-06 07:59:03.437549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:40.928 [2024-11-06 07:59:03.437565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:40.928 [2024-11-06 07:59:03.437581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:40.928 [2024-11-06 07:59:03.437597] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:40.929 [2024-11-06 07:59:03.437614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:40.929 [2024-11-06 07:59:03.437630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:40.929 [2024-11-06 07:59:03.437646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:40.929 [2024-11-06 07:59:03.437661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:40.929 [2024-11-06 07:59:03.437676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:40.929 [2024-11-06 07:59:03.437691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:40.929 [2024-11-06 07:59:03.437708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:40.929 [2024-11-06 07:59:03.437724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:40.929 [2024-11-06 07:59:03.437740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:40.929 [2024-11-06 07:59:03.437755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:40.929 [2024-11-06 07:59:03.437771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:40.929 [2024-11-06 07:59:03.437787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:40.929 [2024-11-06 07:59:03.437802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:40.929 [2024-11-06 07:59:03.437817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:40.929 [2024-11-06 07:59:03.437833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:40.929 [2024-11-06 07:59:03.437848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:40.929 [2024-11-06 07:59:03.437864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:40.929 [2024-11-06 07:59:03.437880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:40.929 [2024-11-06 07:59:03.437895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:40.929 [2024-11-06 07:59:03.437911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:40.929 [2024-11-06 07:59:03.437927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:40.929 [2024-11-06 07:59:03.437942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:40.929 [2024-11-06 07:59:03.437958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:40.929 [2024-11-06 07:59:03.437973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:40.929 [2024-11-06 07:59:03.437988] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:40.929 [2024-11-06 07:59:03.438006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:40.929 [2024-11-06 07:59:03.438023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:40.929 [2024-11-06 07:59:03.438039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:40.929 [2024-11-06 07:59:03.438062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:40.929 [2024-11-06 07:59:03.438078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:40.929 [2024-11-06 07:59:03.438096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:40.929 
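The region sizes in this dump follow directly from the parameters printed a few entries earlier: 23592960 L2P entries at a 4-byte address size is 94371840 bytes, exactly the 90.00 MiB shown for the l2p region, and 2048 P2L checkpoint pages at 4 KiB (an assumption consistent with the dump) give the 8.00 MiB of each p2l region. A standalone C sketch of that check (illustrative, not SPDK code):

/* Cross-check of the FTL layout sizes dumped above. */
#include <stdio.h>

int main(void)
{
    /* "L2P entries: 23592960" x "L2P address size: 4" */
    long long l2p_bytes = 23592960LL * 4;
    /* "P2L checkpoint pages: 2048", assuming 4 KiB pages */
    long long p2l_bytes = 2048LL * 4096;

    printf("l2p region: %.2f MiB\n", l2p_bytes / 1048576.0); /* 90.00, as dumped */
    printf("p2l region: %.2f MiB\n", p2l_bytes / 1048576.0); /*  8.00, as dumped */
    return 0;
}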
[2024-11-06 07:59:03.438111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:40.929 [2024-11-06 07:59:03.438126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:40.929 [2024-11-06 07:59:03.438142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:40.929 [2024-11-06 07:59:03.438159] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:40.929 [2024-11-06 07:59:03.438180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:40.929 [2024-11-06 07:59:03.438198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:40.929 [2024-11-06 07:59:03.438215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:40.929 [2024-11-06 07:59:03.438235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:40.929 [2024-11-06 07:59:03.438270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:40.929 [2024-11-06 07:59:03.438290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:40.929 [2024-11-06 07:59:03.438307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:40.929 [2024-11-06 07:59:03.438324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:40.929 [2024-11-06 07:59:03.438341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:40.929 [2024-11-06 07:59:03.438357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:40.929 [2024-11-06 07:59:03.438373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:40.929 [2024-11-06 07:59:03.438390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:40.929 [2024-11-06 07:59:03.438407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:40.929 [2024-11-06 07:59:03.438423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:40.929 [2024-11-06 07:59:03.438440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:40.929 [2024-11-06 07:59:03.438456] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:40.929 [2024-11-06 07:59:03.438489] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:40.929 [2024-11-06 07:59:03.438508] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:40.929 [2024-11-06 07:59:03.438533] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:40.929 [2024-11-06 07:59:03.438550] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:40.929 [2024-11-06 07:59:03.438566] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:40.929 [2024-11-06 07:59:03.438585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.929 [2024-11-06 07:59:03.438601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:40.929 [2024-11-06 07:59:03.438618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.426 ms 00:22:40.929 [2024-11-06 07:59:03.438640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.929 [2024-11-06 07:59:03.479195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.929 [2024-11-06 07:59:03.479291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:40.929 [2024-11-06 07:59:03.479314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.453 ms 00:22:40.929 [2024-11-06 07:59:03.479328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.929 [2024-11-06 07:59:03.479558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.929 [2024-11-06 07:59:03.479580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:40.929 [2024-11-06 07:59:03.479602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:22:40.929 [2024-11-06 07:59:03.479615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.929 [2024-11-06 07:59:03.536868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.929 [2024-11-06 07:59:03.537152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:40.929 [2024-11-06 07:59:03.537188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.218 ms 00:22:40.929 [2024-11-06 07:59:03.537203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.929 [2024-11-06 07:59:03.537429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.929 [2024-11-06 07:59:03.537453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:40.929 [2024-11-06 07:59:03.537479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:40.929 [2024-11-06 07:59:03.537491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.929 [2024-11-06 07:59:03.538095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.929 [2024-11-06 07:59:03.538116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:40.929 [2024-11-06 07:59:03.538130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.569 ms 00:22:40.929 [2024-11-06 07:59:03.538142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.929 [2024-11-06 07:59:03.538348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.929 [2024-11-06 07:59:03.538370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:40.929 [2024-11-06 07:59:03.538384] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.167 ms 00:22:40.929 [2024-11-06 07:59:03.538396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.196 [2024-11-06 07:59:03.558798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.196 [2024-11-06 07:59:03.558861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:41.196 [2024-11-06 07:59:03.558883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.365 ms 00:22:41.196 [2024-11-06 07:59:03.558897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.196 [2024-11-06 07:59:03.576104] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:22:41.196 [2024-11-06 07:59:03.576196] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:41.196 [2024-11-06 07:59:03.576220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.196 [2024-11-06 07:59:03.576233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:41.196 [2024-11-06 07:59:03.576264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.109 ms 00:22:41.196 [2024-11-06 07:59:03.576280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.196 [2024-11-06 07:59:03.606880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.196 [2024-11-06 07:59:03.607153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:41.196 [2024-11-06 07:59:03.607208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.423 ms 00:22:41.196 [2024-11-06 07:59:03.607222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.196 [2024-11-06 07:59:03.627070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.196 [2024-11-06 07:59:03.627156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:41.197 [2024-11-06 07:59:03.627178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.635 ms 00:22:41.197 [2024-11-06 07:59:03.627191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.197 [2024-11-06 07:59:03.643807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.197 [2024-11-06 07:59:03.643884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:41.197 [2024-11-06 07:59:03.643905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.406 ms 00:22:41.197 [2024-11-06 07:59:03.643919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.197 [2024-11-06 07:59:03.644992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.197 [2024-11-06 07:59:03.645048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:41.197 [2024-11-06 07:59:03.645066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.831 ms 00:22:41.197 [2024-11-06 07:59:03.645078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.197 [2024-11-06 07:59:03.726781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.197 [2024-11-06 07:59:03.726877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:41.197 [2024-11-06 07:59:03.726900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 81.661 ms 00:22:41.197 [2024-11-06 07:59:03.726912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.197 [2024-11-06 07:59:03.743968] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:41.197 [2024-11-06 07:59:03.766312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.197 [2024-11-06 07:59:03.766401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:41.197 [2024-11-06 07:59:03.766423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.182 ms 00:22:41.197 [2024-11-06 07:59:03.766436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.197 [2024-11-06 07:59:03.766622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.197 [2024-11-06 07:59:03.766643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:41.197 [2024-11-06 07:59:03.766662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:41.197 [2024-11-06 07:59:03.766675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.197 [2024-11-06 07:59:03.766761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.197 [2024-11-06 07:59:03.766787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:41.197 [2024-11-06 07:59:03.766802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:22:41.197 [2024-11-06 07:59:03.766814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.197 [2024-11-06 07:59:03.766865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.197 [2024-11-06 07:59:03.766889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:41.197 [2024-11-06 07:59:03.766902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:22:41.197 [2024-11-06 07:59:03.766919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.197 [2024-11-06 07:59:03.766969] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:41.197 [2024-11-06 07:59:03.766986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.197 [2024-11-06 07:59:03.766998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:41.197 [2024-11-06 07:59:03.767012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:22:41.197 [2024-11-06 07:59:03.767024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.197 [2024-11-06 07:59:03.801060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.197 [2024-11-06 07:59:03.801151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:41.197 [2024-11-06 07:59:03.801189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.998 ms 00:22:41.197 [2024-11-06 07:59:03.801202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.197 [2024-11-06 07:59:03.801453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.197 [2024-11-06 07:59:03.801477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:41.197 [2024-11-06 07:59:03.801492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:22:41.197 [2024-11-06 07:59:03.801504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
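The SB metadata layout dumped during startup expresses the same regions in hex FTL blocks. Assuming the FTL's 4 KiB block size, the type:0x2 entry (blk_offs:0x20 blk_sz:0x5a00) converts back to the human-readable l2p line, offset 0.12 MiB and size 90.00 MiB, which also equals the 23592960 x 4-byte L2P table. A standalone C sketch of the conversion (illustrative, not SPDK code):

/* Convert SB-metadata block extents to the MiB values in the layout dump. */
#include <stdio.h>

#define FTL_BLOCK_SIZE 4096 /* assumption: 4 KiB FTL blocks */

static double blocks_to_mib(long long blocks)
{
    return blocks * (double)FTL_BLOCK_SIZE / (1024 * 1024);
}

int main(void)
{
    printf("offset: %.2f MiB\n", blocks_to_mib(0x20));   /* 0.12 MiB (exactly 0.125) */
    printf("blocks: %.2f MiB\n", blocks_to_mib(0x5a00)); /* 90.00 MiB */
    return 0;
}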
00:22:41.197 [2024-11-06 07:59:03.802761] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:41.197 [2024-11-06 07:59:03.808280] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 407.649 ms, result 0 00:22:41.197 [2024-11-06 07:59:03.809372] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:41.486 [2024-11-06 07:59:03.828052] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:42.421  [2024-11-06T07:59:05.984Z] Copying: 21/256 [MB] (21 MBps) [2024-11-06T07:59:06.918Z] Copying: 44/256 [MB] (22 MBps) [2024-11-06T07:59:07.853Z] Copying: 67/256 [MB] (23 MBps) [2024-11-06T07:59:09.229Z] Copying: 90/256 [MB] (22 MBps) [2024-11-06T07:59:10.164Z] Copying: 112/256 [MB] (22 MBps) [2024-11-06T07:59:11.100Z] Copying: 134/256 [MB] (22 MBps) [2024-11-06T07:59:12.034Z] Copying: 157/256 [MB] (22 MBps) [2024-11-06T07:59:12.984Z] Copying: 180/256 [MB] (22 MBps) [2024-11-06T07:59:13.920Z] Copying: 202/256 [MB] (22 MBps) [2024-11-06T07:59:14.855Z] Copying: 225/256 [MB] (22 MBps) [2024-11-06T07:59:15.422Z] Copying: 247/256 [MB] (22 MBps) [2024-11-06T07:59:15.422Z] Copying: 256/256 [MB] (average 22 MBps)[2024-11-06 07:59:15.216026] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:52.793 [2024-11-06 07:59:15.228823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.793 [2024-11-06 07:59:15.228885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:52.793 [2024-11-06 07:59:15.228923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:52.793 [2024-11-06 07:59:15.228935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.793 [2024-11-06 07:59:15.228971] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:52.793 [2024-11-06 07:59:15.232605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.793 [2024-11-06 07:59:15.232647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:52.793 [2024-11-06 07:59:15.232678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.612 ms 00:22:52.793 [2024-11-06 07:59:15.232690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.793 [2024-11-06 07:59:15.234735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.793 [2024-11-06 07:59:15.234777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:52.793 [2024-11-06 07:59:15.234810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.013 ms 00:22:52.793 [2024-11-06 07:59:15.234822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.793 [2024-11-06 07:59:15.242034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.793 [2024-11-06 07:59:15.242076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:52.793 [2024-11-06 07:59:15.242108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.186 ms 00:22:52.793 [2024-11-06 07:59:15.242130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.793 [2024-11-06 07:59:15.249548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.793 
[2024-11-06 07:59:15.249612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:52.793 [2024-11-06 07:59:15.249629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.348 ms 00:22:52.793 [2024-11-06 07:59:15.249641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.793 [2024-11-06 07:59:15.282541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.793 [2024-11-06 07:59:15.282601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:52.793 [2024-11-06 07:59:15.282622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.829 ms 00:22:52.793 [2024-11-06 07:59:15.282634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.793 [2024-11-06 07:59:15.301347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.793 [2024-11-06 07:59:15.301605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:52.793 [2024-11-06 07:59:15.301637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.601 ms 00:22:52.793 [2024-11-06 07:59:15.301665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.793 [2024-11-06 07:59:15.301904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.793 [2024-11-06 07:59:15.301927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:52.793 [2024-11-06 07:59:15.301941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:22:52.793 [2024-11-06 07:59:15.301953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.793 [2024-11-06 07:59:15.334311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.793 [2024-11-06 07:59:15.334390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:52.793 [2024-11-06 07:59:15.334411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.330 ms 00:22:52.793 [2024-11-06 07:59:15.334423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.793 [2024-11-06 07:59:15.365524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.793 [2024-11-06 07:59:15.365791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:52.793 [2024-11-06 07:59:15.365823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.002 ms 00:22:52.793 [2024-11-06 07:59:15.365836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.793 [2024-11-06 07:59:15.396905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.793 [2024-11-06 07:59:15.396976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:52.793 [2024-11-06 07:59:15.397014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.962 ms 00:22:52.793 [2024-11-06 07:59:15.397036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.052 [2024-11-06 07:59:15.427675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.053 [2024-11-06 07:59:15.427746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:53.053 [2024-11-06 07:59:15.427767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.498 ms 00:22:53.053 [2024-11-06 07:59:15.427779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.053 [2024-11-06 07:59:15.427872] 
ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:53.053 [2024-11-06 07:59:15.427900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free [... Bands 2-98 elided: each reports identically, 0 / 261120 wr_cnt: 0 state: free ...] 00:22:53.054 [2024-11-06 07:59:15.429190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120
wr_cnt: 0 state: free
00:22:53.054 [2024-11-06 07:59:15.429202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:22:53.054 [2024-11-06 07:59:15.429223] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:22:53.054 [2024-11-06 07:59:15.429236] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a89fd864-aa96-428e-abd4-47c1715fad37
00:22:53.054 [2024-11-06 07:59:15.429259] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:22:53.054 [2024-11-06 07:59:15.429273] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:22:53.054 [2024-11-06 07:59:15.429284] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:22:53.054 [2024-11-06 07:59:15.429296] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:22:53.054 [2024-11-06 07:59:15.429307] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:22:53.054 [2024-11-06 07:59:15.429319] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:22:53.054 [2024-11-06 07:59:15.429330] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:22:53.054 [2024-11-06 07:59:15.429341] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:22:53.054 [2024-11-06 07:59:15.429351] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:22:53.054 [2024-11-06 07:59:15.429363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:53.054 [2024-11-06 07:59:15.429375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:22:53.054 [2024-11-06 07:59:15.429388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.493 ms
00:22:53.054 [2024-11-06 07:59:15.429405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:53.054 [2024-11-06 07:59:15.446289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:53.054 [2024-11-06 07:59:15.446347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:22:53.054 [2024-11-06 07:59:15.446365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.854 ms
00:22:53.054 [2024-11-06 07:59:15.446377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:53.054 [2024-11-06 07:59:15.446889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:53.054 [2024-11-06 07:59:15.446912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:22:53.054 [2024-11-06 07:59:15.446936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.443 ms
00:22:53.054 [2024-11-06 07:59:15.446949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:53.054 [2024-11-06 07:59:15.494334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:53.054 [2024-11-06 07:59:15.494412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:22:53.054 [2024-11-06 07:59:15.494433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:53.054 [2024-11-06 07:59:15.494445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:53.054 [2024-11-06 07:59:15.494610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:53.054 [2024-11-06 07:59:15.494629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:22:53.054 [2024-11-06 07:59:15.494647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:53.054 [2024-11-06 07:59:15.494659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:53.054 [2024-11-06 07:59:15.494730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:53.054 [2024-11-06 07:59:15.494749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:22:53.054 [2024-11-06 07:59:15.494763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:53.054 [2024-11-06 07:59:15.494779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:53.054 [2024-11-06 07:59:15.494806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:53.054 [2024-11-06 07:59:15.494820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:22:53.054 [2024-11-06 07:59:15.494832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:53.054 [2024-11-06 07:59:15.494850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:53.054 [2024-11-06 07:59:15.606813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:53.054 [2024-11-06 07:59:15.606893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:22:53.054 [2024-11-06 07:59:15.606914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:53.054 [2024-11-06 07:59:15.606927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:53.313 [2024-11-06 07:59:15.695564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:53.313 [2024-11-06 07:59:15.695645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:22:53.313 [2024-11-06 07:59:15.695695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:53.313 [2024-11-06 07:59:15.695708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:53.313 [2024-11-06 07:59:15.695799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:53.313 [2024-11-06 07:59:15.695816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:22:53.313 [2024-11-06 07:59:15.695829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:53.313 [2024-11-06 07:59:15.695841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:53.313 [2024-11-06 07:59:15.695879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:53.313 [2024-11-06 07:59:15.695894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:22:53.313 [2024-11-06 07:59:15.695907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:53.313 [2024-11-06 07:59:15.695918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:53.313 [2024-11-06 07:59:15.696060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:53.313 [2024-11-06 07:59:15.696080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:22:53.313 [2024-11-06 07:59:15.696094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:53.313 [2024-11-06 07:59:15.696106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:53.313 [2024-11-06 07:59:15.696157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:53.313 [2024-11-06 07:59:15.696176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:22:53.313 [2024-11-06 07:59:15.696188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:53.313 [2024-11-06 07:59:15.696200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:53.313 [2024-11-06 07:59:15.696257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:53.313 [2024-11-06 07:59:15.696303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:22:53.313 [2024-11-06 07:59:15.696317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:53.313 [2024-11-06 07:59:15.696329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:53.313 [2024-11-06 07:59:15.696387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:53.313 [2024-11-06 07:59:15.696404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:22:53.313 [2024-11-06 07:59:15.696417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:53.313 [2024-11-06 07:59:15.696428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:53.313 [2024-11-06 07:59:15.696608] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 467.787 ms, result 0
00:22:54.248
00:22:54.248
00:22:54.248 07:59:16 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:22:54.248 07:59:16 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=76222
00:22:54.248 07:59:16 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 76222
00:22:54.249 07:59:16 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 76222 ']'
00:22:54.249 07:59:16 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:54.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
07:59:16 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100
00:22:54.249 07:59:16 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:54.249 07:59:16 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable
00:22:54.249 07:59:16 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:22:54.507 [2024-11-06 07:59:16.906235] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization...
00:22:54.507 [2024-11-06 07:59:16.906686] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76222 ] 00:22:54.507 [2024-11-06 07:59:17.082968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.766 [2024-11-06 07:59:17.213489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:55.700 07:59:18 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:55.700 07:59:18 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:22:55.700 07:59:18 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:22:55.959 [2024-11-06 07:59:18.370938] bdev.c:8607:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:55.959 [2024-11-06 07:59:18.371023] bdev.c:8607:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:55.959 [2024-11-06 07:59:18.554602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.959 [2024-11-06 07:59:18.554699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:55.959 [2024-11-06 07:59:18.554729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:55.959 [2024-11-06 07:59:18.554743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.959 [2024-11-06 07:59:18.558731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.959 [2024-11-06 07:59:18.558778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:55.959 [2024-11-06 07:59:18.558817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.957 ms 00:22:55.959 [2024-11-06 07:59:18.558830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.959 [2024-11-06 07:59:18.558981] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:55.959 [2024-11-06 07:59:18.559970] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:55.959 [2024-11-06 07:59:18.560018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.959 [2024-11-06 07:59:18.560034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:55.959 [2024-11-06 07:59:18.560048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.052 ms 00:22:55.959 [2024-11-06 07:59:18.560060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.959 [2024-11-06 07:59:18.562152] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:55.959 [2024-11-06 07:59:18.579328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.959 [2024-11-06 07:59:18.579430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:55.959 [2024-11-06 07:59:18.579453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.181 ms 00:22:55.959 [2024-11-06 07:59:18.579469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.959 [2024-11-06 07:59:18.579652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.959 [2024-11-06 07:59:18.579678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:55.959 [2024-11-06 07:59:18.579692] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:22:55.959 [2024-11-06 07:59:18.579706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.228 [2024-11-06 07:59:18.588831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.228 [2024-11-06 07:59:18.588930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:56.228 [2024-11-06 07:59:18.588953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.048 ms 00:22:56.228 [2024-11-06 07:59:18.588975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.228 [2024-11-06 07:59:18.589212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.228 [2024-11-06 07:59:18.589238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:56.228 [2024-11-06 07:59:18.589277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:22:56.228 [2024-11-06 07:59:18.589295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.228 [2024-11-06 07:59:18.589352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.228 [2024-11-06 07:59:18.589377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:56.228 [2024-11-06 07:59:18.589389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:22:56.228 [2024-11-06 07:59:18.589404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.228 [2024-11-06 07:59:18.589443] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:56.228 [2024-11-06 07:59:18.595242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.228 [2024-11-06 07:59:18.595304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:56.228 [2024-11-06 07:59:18.595326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.806 ms 00:22:56.229 [2024-11-06 07:59:18.595338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.229 [2024-11-06 07:59:18.595468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.229 [2024-11-06 07:59:18.595488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:56.229 [2024-11-06 07:59:18.595504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:22:56.229 [2024-11-06 07:59:18.595515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.229 [2024-11-06 07:59:18.595553] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:56.229 [2024-11-06 07:59:18.595583] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:56.229 [2024-11-06 07:59:18.595639] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:56.229 [2024-11-06 07:59:18.595663] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:56.229 [2024-11-06 07:59:18.595782] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:56.229 [2024-11-06 07:59:18.595798] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:56.229 [2024-11-06 07:59:18.595819] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:56.229 [2024-11-06 07:59:18.595834] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:56.229 [2024-11-06 07:59:18.595873] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:56.229 [2024-11-06 07:59:18.595885] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:56.229 [2024-11-06 07:59:18.595898] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:56.229 [2024-11-06 07:59:18.595909] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:56.229 [2024-11-06 07:59:18.595925] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:56.229 [2024-11-06 07:59:18.595938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.229 [2024-11-06 07:59:18.595952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:56.229 [2024-11-06 07:59:18.595963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.391 ms 00:22:56.229 [2024-11-06 07:59:18.595977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.229 [2024-11-06 07:59:18.596076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.229 [2024-11-06 07:59:18.596096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:56.229 [2024-11-06 07:59:18.596107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:56.229 [2024-11-06 07:59:18.596120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.229 [2024-11-06 07:59:18.596237] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:56.229 [2024-11-06 07:59:18.596272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:56.229 [2024-11-06 07:59:18.596288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:56.229 [2024-11-06 07:59:18.596302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:56.229 [2024-11-06 07:59:18.596314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:56.229 [2024-11-06 07:59:18.596327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:56.229 [2024-11-06 07:59:18.596337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:56.229 [2024-11-06 07:59:18.596356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:56.229 [2024-11-06 07:59:18.596366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:56.229 [2024-11-06 07:59:18.596379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:56.229 [2024-11-06 07:59:18.596389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:56.229 [2024-11-06 07:59:18.596404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:56.229 [2024-11-06 07:59:18.596415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:56.229 [2024-11-06 07:59:18.596429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:56.229 [2024-11-06 07:59:18.596439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:56.229 [2024-11-06 07:59:18.596452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:56.229 
[2024-11-06 07:59:18.596462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:56.229 [2024-11-06 07:59:18.596476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:56.229 [2024-11-06 07:59:18.596486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:56.229 [2024-11-06 07:59:18.596499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:56.229 [2024-11-06 07:59:18.596522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:56.229 [2024-11-06 07:59:18.596536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:56.229 [2024-11-06 07:59:18.596546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:56.229 [2024-11-06 07:59:18.596562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:56.229 [2024-11-06 07:59:18.596572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:56.229 [2024-11-06 07:59:18.596585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:56.229 [2024-11-06 07:59:18.596596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:56.229 [2024-11-06 07:59:18.596608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:56.229 [2024-11-06 07:59:18.596619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:56.229 [2024-11-06 07:59:18.596632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:56.229 [2024-11-06 07:59:18.596642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:56.229 [2024-11-06 07:59:18.596657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:56.229 [2024-11-06 07:59:18.596668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:56.229 [2024-11-06 07:59:18.596680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:56.229 [2024-11-06 07:59:18.596691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:56.229 [2024-11-06 07:59:18.596703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:56.229 [2024-11-06 07:59:18.596713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:56.229 [2024-11-06 07:59:18.596726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:56.229 [2024-11-06 07:59:18.596737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:56.229 [2024-11-06 07:59:18.596752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:56.229 [2024-11-06 07:59:18.596762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:56.229 [2024-11-06 07:59:18.596775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:56.229 [2024-11-06 07:59:18.596786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:56.229 [2024-11-06 07:59:18.596799] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:56.229 [2024-11-06 07:59:18.596812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:56.229 [2024-11-06 07:59:18.596825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:56.229 [2024-11-06 07:59:18.596838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:56.229 [2024-11-06 07:59:18.596853] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:22:56.229 [2024-11-06 07:59:18.596864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:56.229 [2024-11-06 07:59:18.596877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:56.229 [2024-11-06 07:59:18.596888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:56.229 [2024-11-06 07:59:18.596900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:56.229 [2024-11-06 07:59:18.596911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:56.229 [2024-11-06 07:59:18.596926] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:56.229 [2024-11-06 07:59:18.596940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:56.229 [2024-11-06 07:59:18.596959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:56.229 [2024-11-06 07:59:18.596971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:56.229 [2024-11-06 07:59:18.596985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:56.229 [2024-11-06 07:59:18.596996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:56.229 [2024-11-06 07:59:18.597010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:56.229 [2024-11-06 07:59:18.597033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:56.229 [2024-11-06 07:59:18.597050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:56.229 [2024-11-06 07:59:18.597062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:56.229 [2024-11-06 07:59:18.597075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:56.229 [2024-11-06 07:59:18.597088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:56.229 [2024-11-06 07:59:18.597101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:56.229 [2024-11-06 07:59:18.597114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:56.229 [2024-11-06 07:59:18.597129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:56.229 [2024-11-06 07:59:18.597141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:56.229 [2024-11-06 07:59:18.597154] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:56.229 [2024-11-06 
07:59:18.597167] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:56.230 [2024-11-06 07:59:18.597185] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:56.230 [2024-11-06 07:59:18.597196] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:56.230 [2024-11-06 07:59:18.597210] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:56.230 [2024-11-06 07:59:18.597222] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:56.230 [2024-11-06 07:59:18.597238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.230 [2024-11-06 07:59:18.597272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:56.230 [2024-11-06 07:59:18.597288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.067 ms 00:22:56.230 [2024-11-06 07:59:18.597299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.230 [2024-11-06 07:59:18.637820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.230 [2024-11-06 07:59:18.637895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:56.230 [2024-11-06 07:59:18.637938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.432 ms 00:22:56.230 [2024-11-06 07:59:18.637951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.230 [2024-11-06 07:59:18.638157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.230 [2024-11-06 07:59:18.638180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:56.230 [2024-11-06 07:59:18.638197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:22:56.230 [2024-11-06 07:59:18.638209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.230 [2024-11-06 07:59:18.683862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.230 [2024-11-06 07:59:18.683937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:56.230 [2024-11-06 07:59:18.683965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.614 ms 00:22:56.230 [2024-11-06 07:59:18.683982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.230 [2024-11-06 07:59:18.684146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.230 [2024-11-06 07:59:18.684165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:56.230 [2024-11-06 07:59:18.684182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:56.230 [2024-11-06 07:59:18.684193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.230 [2024-11-06 07:59:18.684817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.230 [2024-11-06 07:59:18.684842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:56.230 [2024-11-06 07:59:18.684860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.591 ms 00:22:56.230 [2024-11-06 07:59:18.684874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:22:56.230 [2024-11-06 07:59:18.685071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.230 [2024-11-06 07:59:18.685090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:56.230 [2024-11-06 07:59:18.685105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:22:56.230 [2024-11-06 07:59:18.685116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.230 [2024-11-06 07:59:18.706909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.230 [2024-11-06 07:59:18.706982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:56.230 [2024-11-06 07:59:18.707007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.756 ms 00:22:56.230 [2024-11-06 07:59:18.707020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.230 [2024-11-06 07:59:18.724295] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:22:56.230 [2024-11-06 07:59:18.724371] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:56.230 [2024-11-06 07:59:18.724418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.230 [2024-11-06 07:59:18.724433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:56.230 [2024-11-06 07:59:18.724455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.199 ms 00:22:56.230 [2024-11-06 07:59:18.724468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.230 [2024-11-06 07:59:18.754735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.230 [2024-11-06 07:59:18.754840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:56.230 [2024-11-06 07:59:18.754866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.079 ms 00:22:56.230 [2024-11-06 07:59:18.754880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.230 [2024-11-06 07:59:18.772106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.230 [2024-11-06 07:59:18.772196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:56.230 [2024-11-06 07:59:18.772240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.041 ms 00:22:56.230 [2024-11-06 07:59:18.772253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.230 [2024-11-06 07:59:18.788133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.230 [2024-11-06 07:59:18.788240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:56.230 [2024-11-06 07:59:18.788300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.703 ms 00:22:56.230 [2024-11-06 07:59:18.788313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.230 [2024-11-06 07:59:18.789356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.230 [2024-11-06 07:59:18.789519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:56.230 [2024-11-06 07:59:18.789552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.827 ms 00:22:56.230 [2024-11-06 07:59:18.789566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.489 [2024-11-06 
07:59:18.901511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.489 [2024-11-06 07:59:18.901610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:56.489 [2024-11-06 07:59:18.901643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 111.889 ms 00:22:56.489 [2024-11-06 07:59:18.901659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.489 [2024-11-06 07:59:18.920065] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:56.489 [2024-11-06 07:59:18.944441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.489 [2024-11-06 07:59:18.944562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:56.489 [2024-11-06 07:59:18.944589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.543 ms 00:22:56.489 [2024-11-06 07:59:18.944613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.489 [2024-11-06 07:59:18.944789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.489 [2024-11-06 07:59:18.944819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:56.489 [2024-11-06 07:59:18.944836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:56.489 [2024-11-06 07:59:18.944854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.489 [2024-11-06 07:59:18.944940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.489 [2024-11-06 07:59:18.944964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:56.489 [2024-11-06 07:59:18.944980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:22:56.489 [2024-11-06 07:59:18.944998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.489 [2024-11-06 07:59:18.945066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.489 [2024-11-06 07:59:18.945089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:56.489 [2024-11-06 07:59:18.945105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:56.489 [2024-11-06 07:59:18.945122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.489 [2024-11-06 07:59:18.945185] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:56.489 [2024-11-06 07:59:18.945221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.489 [2024-11-06 07:59:18.945238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:56.489 [2024-11-06 07:59:18.945297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:22:56.489 [2024-11-06 07:59:18.945323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.489 [2024-11-06 07:59:18.985484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.489 [2024-11-06 07:59:18.985584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:56.489 [2024-11-06 07:59:18.985632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.088 ms 00:22:56.489 [2024-11-06 07:59:18.985650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.489 [2024-11-06 07:59:18.985892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.489 [2024-11-06 07:59:18.985920] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:22:56.489 [2024-11-06 07:59:18.985945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms
00:22:56.489 [2024-11-06 07:59:18.985961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:56.489 [2024-11-06 07:59:18.987409] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:22:56.489 [2024-11-06 07:59:18.993247] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 432.348 ms, result 0
00:22:56.489 [2024-11-06 07:59:18.994669] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:22:56.489 Some configs were skipped because the RPC state that can call them passed over.
00:22:56.489 07:59:19 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:22:56.748 [2024-11-06 07:59:19.335893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:56.748 [2024-11-06 07:59:19.336222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:22:56.748 [2024-11-06 07:59:19.336385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.662 ms
00:22:56.748 [2024-11-06 07:59:19.336555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:56.748 [2024-11-06 07:59:19.336663] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.442 ms, result 0
00:22:56.748 true
00:22:56.748 07:59:19 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:22:57.314 [2024-11-06 07:59:19.643797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:57.314 [2024-11-06 07:59:19.644044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:22:57.314 [2024-11-06 07:59:19.644083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.207 ms
00:22:57.314 [2024-11-06 07:59:19.644097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:57.314 [2024-11-06 07:59:19.644165] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.577 ms, result 0
00:22:57.314 true
00:22:57.314 07:59:19 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 76222
00:22:57.314 07:59:19 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 76222 ']'
00:22:57.314 07:59:19 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 76222
00:22:57.314 07:59:19 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname
00:22:57.314 07:59:19 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:22:57.314 07:59:19 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76222
00:22:57.314 killing process with pid 76222
07:59:19 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:22:57.314 07:59:19 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:22:57.314 07:59:19 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76222'
00:22:57.314 07:59:19 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 76222
00:22:57.314 07:59:19 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 76222
00:22:58.253 [2024-11-06 07:59:20.718804]
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.253 [2024-11-06 07:59:20.718888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:58.253 [2024-11-06 07:59:20.718911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:58.253 [2024-11-06 07:59:20.718926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.253 [2024-11-06 07:59:20.718959] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:58.253 [2024-11-06 07:59:20.722610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.253 [2024-11-06 07:59:20.722649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:58.253 [2024-11-06 07:59:20.722673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.622 ms 00:22:58.253 [2024-11-06 07:59:20.722685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.253 [2024-11-06 07:59:20.723007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.253 [2024-11-06 07:59:20.723028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:58.253 [2024-11-06 07:59:20.723043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:22:58.253 [2024-11-06 07:59:20.723055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.253 [2024-11-06 07:59:20.727372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.253 [2024-11-06 07:59:20.727417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:58.254 [2024-11-06 07:59:20.727437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.287 ms 00:22:58.254 [2024-11-06 07:59:20.727452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.254 [2024-11-06 07:59:20.734809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.254 [2024-11-06 07:59:20.734854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:58.254 [2024-11-06 07:59:20.734877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.300 ms 00:22:58.254 [2024-11-06 07:59:20.734890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.254 [2024-11-06 07:59:20.748006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.254 [2024-11-06 07:59:20.748073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:58.254 [2024-11-06 07:59:20.748100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.031 ms 00:22:58.254 [2024-11-06 07:59:20.748127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.254 [2024-11-06 07:59:20.757612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.254 [2024-11-06 07:59:20.757923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:58.254 [2024-11-06 07:59:20.757965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.398 ms 00:22:58.254 [2024-11-06 07:59:20.757983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.254 [2024-11-06 07:59:20.758170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.254 [2024-11-06 07:59:20.758192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:58.254 [2024-11-06 07:59:20.758208] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:22:58.254 [2024-11-06 07:59:20.758220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.254 [2024-11-06 07:59:20.771989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.254 [2024-11-06 07:59:20.772049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:58.254 [2024-11-06 07:59:20.772071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.539 ms 00:22:58.254 [2024-11-06 07:59:20.772083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.254 [2024-11-06 07:59:20.785063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.254 [2024-11-06 07:59:20.785123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:58.254 [2024-11-06 07:59:20.785149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.898 ms 00:22:58.254 [2024-11-06 07:59:20.785162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.254 [2024-11-06 07:59:20.797633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.254 [2024-11-06 07:59:20.797698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:58.254 [2024-11-06 07:59:20.797721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.395 ms 00:22:58.254 [2024-11-06 07:59:20.797733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.254 [2024-11-06 07:59:20.810332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.254 [2024-11-06 07:59:20.810392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:58.254 [2024-11-06 07:59:20.810414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.501 ms 00:22:58.254 [2024-11-06 07:59:20.810426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.254 [2024-11-06 07:59:20.810483] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:58.254 [2024-11-06 07:59:20.810510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.810529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.810542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.810558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.810571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.810590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.810604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.810619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.810633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.810648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 
07:59:20.810661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.810677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.810690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.810705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.810718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.810734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.810747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.810766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.810779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.810795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.810808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.810826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.810839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.810855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.810867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.810883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.810897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.810912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.810926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.810950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.810963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.810979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.810993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.811008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.811021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:22:58.254 [2024-11-06 07:59:20.811037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.811051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.811069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.811082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.811098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.811111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.811127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.811140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.811157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.811171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.811186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.811199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.811215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.811228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.811243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.811277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.811295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.811308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.811327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.811340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.811356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.811368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.811384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.811397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.811412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:22:58.254 [2024-11-06 07:59:20.811425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:58.255 [2024-11-06 07:59:20.811441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:58.255 [2024-11-06 07:59:20.811454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:58.255 [2024-11-06 07:59:20.811469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:58.255 [2024-11-06 07:59:20.811482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:58.255 [2024-11-06 07:59:20.811496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:58.255 [2024-11-06 07:59:20.811509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:58.255 [2024-11-06 07:59:20.811526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:58.255 [2024-11-06 07:59:20.811541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:58.255 [2024-11-06 07:59:20.811561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:58.255 [2024-11-06 07:59:20.811574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:58.255 [2024-11-06 07:59:20.811589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:58.255 [2024-11-06 07:59:20.811602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:58.255 [2024-11-06 07:59:20.811617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:58.255 [2024-11-06 07:59:20.811630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:58.255 [2024-11-06 07:59:20.811645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:58.255 [2024-11-06 07:59:20.811658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:58.255 [2024-11-06 07:59:20.811673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:58.255 [2024-11-06 07:59:20.811685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:58.255 [2024-11-06 07:59:20.811701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:58.255 [2024-11-06 07:59:20.811713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:58.255 [2024-11-06 07:59:20.811728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:58.255 [2024-11-06 07:59:20.811740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:58.255 [2024-11-06 07:59:20.811755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:58.255 [2024-11-06 07:59:20.811768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:58.255 [2024-11-06 07:59:20.811785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:58.255 [2024-11-06 07:59:20.811797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:58.255 [2024-11-06 07:59:20.811812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:58.255 [2024-11-06 07:59:20.811825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:58.255 [2024-11-06 07:59:20.811840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:58.255 [2024-11-06 07:59:20.811852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:58.255 [2024-11-06 07:59:20.811867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:58.255 [2024-11-06 07:59:20.811880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:58.255 [2024-11-06 07:59:20.811895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:58.255 [2024-11-06 07:59:20.811907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:58.255 [2024-11-06 07:59:20.811922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:58.255 [2024-11-06 07:59:20.811935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:58.255 [2024-11-06 07:59:20.811953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:58.255 [2024-11-06 07:59:20.811966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:58.255 [2024-11-06 07:59:20.811982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:58.255 [2024-11-06 07:59:20.812005] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:58.255 [2024-11-06 07:59:20.812022] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a89fd864-aa96-428e-abd4-47c1715fad37 00:22:58.255 [2024-11-06 07:59:20.812119] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:58.255 [2024-11-06 07:59:20.812139] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:58.255 [2024-11-06 07:59:20.812155] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:58.255 [2024-11-06 07:59:20.812169] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:58.255 [2024-11-06 07:59:20.812180] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:58.255 [2024-11-06 07:59:20.812194] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:58.255 [2024-11-06 07:59:20.812206] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:58.255 [2024-11-06 07:59:20.812219] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:58.255 [2024-11-06 07:59:20.812230] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
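
The dump above is the FTL shutdown summary: all 100 bands are free with 0 of 261120 blocks valid, and WAF (write amplification factor, total media writes divided by user writes) prints as inf because this run recorded 960 internal metadata writes against 0 user writes. A minimal sketch for summarizing such a dump offline, assuming the console output was saved to a file named build.log (a hypothetical path):

  # Count bands per state, then pull the two counters behind the WAF line.
  grep -o 'state: [a-z]*' build.log | sort | uniq -c
  grep -oE '(total|user) writes: [0-9]+' build.log | sort -u
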
00:22:58.255 [2024-11-06 07:59:20.812244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.255 [2024-11-06 07:59:20.812272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:58.255 [2024-11-06 07:59:20.812288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.765 ms 00:22:58.255 [2024-11-06 07:59:20.812300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.255 [2024-11-06 07:59:20.830003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.255 [2024-11-06 07:59:20.830207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:58.255 [2024-11-06 07:59:20.830369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.641 ms 00:22:58.255 [2024-11-06 07:59:20.830424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.255 [2024-11-06 07:59:20.831143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.255 [2024-11-06 07:59:20.831296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:58.255 [2024-11-06 07:59:20.831418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.446 ms 00:22:58.255 [2024-11-06 07:59:20.831572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.514 [2024-11-06 07:59:20.892010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:58.514 [2024-11-06 07:59:20.892321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:58.514 [2024-11-06 07:59:20.892467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:58.514 [2024-11-06 07:59:20.892525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.514 [2024-11-06 07:59:20.892802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:58.514 [2024-11-06 07:59:20.892937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:58.514 [2024-11-06 07:59:20.893082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:58.514 [2024-11-06 07:59:20.893215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.514 [2024-11-06 07:59:20.893403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:58.514 [2024-11-06 07:59:20.893463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:58.514 [2024-11-06 07:59:20.893596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:58.514 [2024-11-06 07:59:20.893718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.514 [2024-11-06 07:59:20.893802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:58.514 [2024-11-06 07:59:20.893917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:58.514 [2024-11-06 07:59:20.894037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:58.514 [2024-11-06 07:59:20.894094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.514 [2024-11-06 07:59:21.006472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:58.514 [2024-11-06 07:59:21.006779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:58.514 [2024-11-06 07:59:21.006912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:58.514 [2024-11-06 07:59:21.007038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:58.514 [2024-11-06 07:59:21.096454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:58.514 [2024-11-06 07:59:21.096727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:58.514 [2024-11-06 07:59:21.096858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:58.514 [2024-11-06 07:59:21.096911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.514 [2024-11-06 07:59:21.097156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:58.514 [2024-11-06 07:59:21.097314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:58.514 [2024-11-06 07:59:21.097453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:58.514 [2024-11-06 07:59:21.097515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.514 [2024-11-06 07:59:21.097671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:58.514 [2024-11-06 07:59:21.097739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:58.514 [2024-11-06 07:59:21.097936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:58.514 [2024-11-06 07:59:21.097962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.514 [2024-11-06 07:59:21.098127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:58.514 [2024-11-06 07:59:21.098163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:58.514 [2024-11-06 07:59:21.098185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:58.514 [2024-11-06 07:59:21.098198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.514 [2024-11-06 07:59:21.098281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:58.514 [2024-11-06 07:59:21.098302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:58.514 [2024-11-06 07:59:21.098321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:58.514 [2024-11-06 07:59:21.098334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.514 [2024-11-06 07:59:21.098393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:58.514 [2024-11-06 07:59:21.098416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:58.514 [2024-11-06 07:59:21.098440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:58.514 [2024-11-06 07:59:21.098453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.514 [2024-11-06 07:59:21.098523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:58.514 [2024-11-06 07:59:21.098542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:58.514 [2024-11-06 07:59:21.098561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:58.514 [2024-11-06 07:59:21.098574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.514 [2024-11-06 07:59:21.098768] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 379.932 ms, result 0
00:22:59.889 07:59:22 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data
00:22:59.889 07:59:22 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
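
The first FTL instance has shut down cleanly (result 0), and trim.sh@85 now reads the whole device back into a file with spdk_dd, as traced above. Restated as a stand-alone command with the exact arguments from the log (--ib names the input bdev, --of the output file, --count the number of blocks, --json the bdev configuration to load); 65536 blocks for the 256 MB copy reported below implies the bdev's 4 KiB block size:

  # Read the full ftl0 bdev back into a host file (arguments taken verbatim
  # from the trace above; 65536 blocks x 4 KiB = 256 MiB).
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --ib=ftl0 \
      --of=/home/vagrant/spdk_repo/spdk/test/ftl/data \
      --count=65536 \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
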
00:22:59.889 [2024-11-06 07:59:22.195609] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:22:59.889 [2024-11-06 07:59:22.195800] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76290 ] 00:22:59.889 [2024-11-06 07:59:22.383640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.147 [2024-11-06 07:59:22.517446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:00.405 [2024-11-06 07:59:22.884043] bdev.c:8607:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:00.405 [2024-11-06 07:59:22.884133] bdev.c:8607:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:00.664 [2024-11-06 07:59:23.050494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.664 [2024-11-06 07:59:23.050574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:00.664 [2024-11-06 07:59:23.050595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:00.664 [2024-11-06 07:59:23.050608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.664 [2024-11-06 07:59:23.054202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.664 [2024-11-06 07:59:23.054417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:00.664 [2024-11-06 07:59:23.054447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.553 ms 00:23:00.664 [2024-11-06 07:59:23.054462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.664 [2024-11-06 07:59:23.054651] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:00.664 [2024-11-06 07:59:23.055627] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:00.664 [2024-11-06 07:59:23.055666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.664 [2024-11-06 07:59:23.055681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:00.664 [2024-11-06 07:59:23.055695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.027 ms 00:23:00.664 [2024-11-06 07:59:23.055707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.664 [2024-11-06 07:59:23.057812] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:00.664 [2024-11-06 07:59:23.074844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.664 [2024-11-06 07:59:23.074909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:00.664 [2024-11-06 07:59:23.074936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.031 ms 00:23:00.664 [2024-11-06 07:59:23.074949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.664 [2024-11-06 07:59:23.075104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.664 [2024-11-06 07:59:23.075126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:00.664 [2024-11-06 07:59:23.075140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*:
[FTL][ftl0] duration: 0.033 ms 00:23:00.664 [2024-11-06 07:59:23.075152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.664 [2024-11-06 07:59:23.084102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.664 [2024-11-06 07:59:23.084433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:00.664 [2024-11-06 07:59:23.084467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.882 ms 00:23:00.664 [2024-11-06 07:59:23.084481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.664 [2024-11-06 07:59:23.084665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.664 [2024-11-06 07:59:23.084688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:00.664 [2024-11-06 07:59:23.084702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:23:00.664 [2024-11-06 07:59:23.084714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.664 [2024-11-06 07:59:23.084759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.664 [2024-11-06 07:59:23.084780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:00.664 [2024-11-06 07:59:23.084798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:00.664 [2024-11-06 07:59:23.084810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.664 [2024-11-06 07:59:23.084845] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:00.664 [2024-11-06 07:59:23.089897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.664 [2024-11-06 07:59:23.089937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:00.664 [2024-11-06 07:59:23.089952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.064 ms 00:23:00.664 [2024-11-06 07:59:23.089964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.664 [2024-11-06 07:59:23.090064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.664 [2024-11-06 07:59:23.090085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:00.664 [2024-11-06 07:59:23.090098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:00.664 [2024-11-06 07:59:23.090109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.664 [2024-11-06 07:59:23.090141] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:00.664 [2024-11-06 07:59:23.090171] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:00.664 [2024-11-06 07:59:23.090217] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:00.664 [2024-11-06 07:59:23.090238] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:00.664 [2024-11-06 07:59:23.090390] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:00.664 [2024-11-06 07:59:23.090409] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:00.664 [2024-11-06 07:59:23.090424] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:00.664 [2024-11-06 07:59:23.090439] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:00.664 [2024-11-06 07:59:23.090452] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:00.664 [2024-11-06 07:59:23.090471] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:00.664 [2024-11-06 07:59:23.090482] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:00.664 [2024-11-06 07:59:23.090493] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:00.664 [2024-11-06 07:59:23.090505] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:00.664 [2024-11-06 07:59:23.090518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.664 [2024-11-06 07:59:23.090530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:00.664 [2024-11-06 07:59:23.090543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.380 ms 00:23:00.664 [2024-11-06 07:59:23.090554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.664 [2024-11-06 07:59:23.090670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.664 [2024-11-06 07:59:23.090686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:00.664 [2024-11-06 07:59:23.090698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:23:00.664 [2024-11-06 07:59:23.090715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.664 [2024-11-06 07:59:23.090831] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:00.664 [2024-11-06 07:59:23.090848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:00.664 [2024-11-06 07:59:23.090860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:00.664 [2024-11-06 07:59:23.090872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:00.664 [2024-11-06 07:59:23.090884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:00.664 [2024-11-06 07:59:23.090895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:00.664 [2024-11-06 07:59:23.090906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:00.664 [2024-11-06 07:59:23.090918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:00.664 [2024-11-06 07:59:23.090929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:00.664 [2024-11-06 07:59:23.090940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:00.664 [2024-11-06 07:59:23.090950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:00.664 [2024-11-06 07:59:23.090960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:00.664 [2024-11-06 07:59:23.090971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:00.664 [2024-11-06 07:59:23.090996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:00.664 [2024-11-06 07:59:23.091007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:00.664 [2024-11-06 07:59:23.091017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:00.664 [2024-11-06 07:59:23.091028] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:00.664 [2024-11-06 07:59:23.091039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:00.664 [2024-11-06 07:59:23.091049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:00.664 [2024-11-06 07:59:23.091060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:00.664 [2024-11-06 07:59:23.091073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:00.664 [2024-11-06 07:59:23.091084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:00.664 [2024-11-06 07:59:23.091095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:00.664 [2024-11-06 07:59:23.091105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:00.664 [2024-11-06 07:59:23.091115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:00.664 [2024-11-06 07:59:23.091126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:00.664 [2024-11-06 07:59:23.091136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:00.664 [2024-11-06 07:59:23.091147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:00.664 [2024-11-06 07:59:23.091158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:00.664 [2024-11-06 07:59:23.091168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:00.664 [2024-11-06 07:59:23.091179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:00.664 [2024-11-06 07:59:23.091189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:00.664 [2024-11-06 07:59:23.091199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:00.664 [2024-11-06 07:59:23.091210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:00.664 [2024-11-06 07:59:23.091220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:00.664 [2024-11-06 07:59:23.091231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:00.664 [2024-11-06 07:59:23.091241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:00.664 [2024-11-06 07:59:23.091252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:00.664 [2024-11-06 07:59:23.091262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:00.665 [2024-11-06 07:59:23.091272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:00.665 [2024-11-06 07:59:23.091631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:00.665 [2024-11-06 07:59:23.091679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:00.665 [2024-11-06 07:59:23.091718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:00.665 [2024-11-06 07:59:23.091848] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:00.665 [2024-11-06 07:59:23.091900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:00.665 [2024-11-06 07:59:23.091939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:00.665 [2024-11-06 07:59:23.091978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:00.665 [2024-11-06 07:59:23.092111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:00.665 
[2024-11-06 07:59:23.092151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:00.665 [2024-11-06 07:59:23.092270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:00.665 [2024-11-06 07:59:23.092325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:00.665 [2024-11-06 07:59:23.092501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:00.665 [2024-11-06 07:59:23.092550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:00.665 [2024-11-06 07:59:23.092590] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:00.665 [2024-11-06 07:59:23.092752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:00.665 [2024-11-06 07:59:23.092822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:00.665 [2024-11-06 07:59:23.093015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:00.665 [2024-11-06 07:59:23.093103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:00.665 [2024-11-06 07:59:23.093225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:00.665 [2024-11-06 07:59:23.093261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:00.665 [2024-11-06 07:59:23.093276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:00.665 [2024-11-06 07:59:23.093289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:00.665 [2024-11-06 07:59:23.093301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:00.665 [2024-11-06 07:59:23.093312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:00.665 [2024-11-06 07:59:23.093324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:00.665 [2024-11-06 07:59:23.093335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:00.665 [2024-11-06 07:59:23.093346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:00.665 [2024-11-06 07:59:23.093358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:00.665 [2024-11-06 07:59:23.093370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:00.665 [2024-11-06 07:59:23.093382] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:00.665 [2024-11-06 07:59:23.093396] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:00.665 [2024-11-06 07:59:23.093409] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:00.665 [2024-11-06 07:59:23.093422] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:00.665 [2024-11-06 07:59:23.093434] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:00.665 [2024-11-06 07:59:23.093446] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
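
The superblock layout dump above records each region as a hex block offset and block size; with the 4 KiB FTL block size these decode back to the MiB figures printed earlier in the layout dump. For instance, the largest base-device region, type 0x9, spans 0x1900000 blocks, which matches the 102400.00 MiB shown for Region data_btm above (the type-to-name mapping is an inference from the matching sizes). A quick decode, as a sketch:

  # 0x1900000 blocks x 4 KiB per block, expressed in MiB.
  printf '%d MiB\n' $(( 0x1900000 * 4096 / 1048576 ))   # prints 102400 MiB
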
00:23:00.665 [2024-11-06 07:59:23.093460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.665 [2024-11-06 07:59:23.093474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:00.665 [2024-11-06 07:59:23.093487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.699 ms 00:23:00.665 [2024-11-06 07:59:23.093507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.665 [2024-11-06 07:59:23.134033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.665 [2024-11-06 07:59:23.134364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:00.665 [2024-11-06 07:59:23.134399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.436 ms 00:23:00.665 [2024-11-06 07:59:23.134413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.665 [2024-11-06 07:59:23.134633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.665 [2024-11-06 07:59:23.134655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:00.665 [2024-11-06 07:59:23.134677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:23:00.665 [2024-11-06 07:59:23.134689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.665 [2024-11-06 07:59:23.186248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.665 [2024-11-06 07:59:23.186330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:00.665 [2024-11-06 07:59:23.186352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.523 ms 00:23:00.665 [2024-11-06 07:59:23.186365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.665 [2024-11-06 07:59:23.186557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.665 [2024-11-06 07:59:23.186579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:00.665 [2024-11-06 07:59:23.186593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:00.665 [2024-11-06 07:59:23.186605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.665 [2024-11-06 07:59:23.187170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.665 [2024-11-06 07:59:23.187196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:00.665 [2024-11-06 07:59:23.187212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:23:00.665 [2024-11-06 07:59:23.187223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.665 [2024-11-06 07:59:23.187438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.665 [2024-11-06 07:59:23.187461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:00.665 [2024-11-06 07:59:23.187474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:23:00.665 [2024-11-06 07:59:23.187486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.665 [2024-11-06 07:59:23.207648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.665 [2024-11-06 07:59:23.207714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:00.665 [2024-11-06 07:59:23.207736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.129 ms 00:23:00.665 [2024-11-06 07:59:23.207750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.665 [2024-11-06 07:59:23.224912] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:23:00.665 [2024-11-06 07:59:23.224971] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:00.665 [2024-11-06 07:59:23.224994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.665 [2024-11-06 07:59:23.225008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:00.665 [2024-11-06 07:59:23.225031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.038 ms 00:23:00.665 [2024-11-06 07:59:23.225046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.665 [2024-11-06 07:59:23.255107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.665 [2024-11-06 07:59:23.255214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:00.665 [2024-11-06 07:59:23.255237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.915 ms 00:23:00.665 [2024-11-06 07:59:23.255264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.665 [2024-11-06 07:59:23.272004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.665 [2024-11-06 07:59:23.272065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:00.665 [2024-11-06 07:59:23.272085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.588 ms 00:23:00.665 [2024-11-06 07:59:23.272097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.665 [2024-11-06 07:59:23.287671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.665 [2024-11-06 07:59:23.287903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:00.665 [2024-11-06 07:59:23.287936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.445 ms 00:23:00.665 [2024-11-06 07:59:23.287949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.665 [2024-11-06 07:59:23.288999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.665 [2024-11-06 07:59:23.289054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:00.665 [2024-11-06 07:59:23.289072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.844 ms 00:23:00.665 [2024-11-06 07:59:23.289084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.923 [2024-11-06 07:59:23.368888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*:
[FTL][ftl0] Action 00:23:00.923 [2024-11-06 07:59:23.368968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:00.923 [2024-11-06 07:59:23.368991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.764 ms 00:23:00.923 [2024-11-06 07:59:23.369005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.924 [2024-11-06 07:59:23.384263] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:00.924 [2024-11-06 07:59:23.405754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.924 [2024-11-06 07:59:23.405837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:00.924 [2024-11-06 07:59:23.405860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.554 ms 00:23:00.924 [2024-11-06 07:59:23.405872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.924 [2024-11-06 07:59:23.406047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.924 [2024-11-06 07:59:23.406074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:00.924 [2024-11-06 07:59:23.406088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:00.924 [2024-11-06 07:59:23.406100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.924 [2024-11-06 07:59:23.406178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.924 [2024-11-06 07:59:23.406195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:00.924 [2024-11-06 07:59:23.406208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:23:00.924 [2024-11-06 07:59:23.406220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.924 [2024-11-06 07:59:23.406299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.924 [2024-11-06 07:59:23.406321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:00.924 [2024-11-06 07:59:23.406339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:23:00.924 [2024-11-06 07:59:23.406365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.924 [2024-11-06 07:59:23.406415] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:00.924 [2024-11-06 07:59:23.406433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.924 [2024-11-06 07:59:23.406444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:00.924 [2024-11-06 07:59:23.406457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:23:00.924 [2024-11-06 07:59:23.406468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.924 [2024-11-06 07:59:23.438873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.924 [2024-11-06 07:59:23.438960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:00.924 [2024-11-06 07:59:23.438998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.373 ms 00:23:00.924 [2024-11-06 07:59:23.439013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.924 [2024-11-06 07:59:23.439199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.924 [2024-11-06 07:59:23.439222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:23:00.924 [2024-11-06 07:59:23.439236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:23:00.924 [2024-11-06 07:59:23.439282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.924 [2024-11-06 07:59:23.440609] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:00.924 [2024-11-06 07:59:23.445186] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 389.742 ms, result 0 00:23:00.924 [2024-11-06 07:59:23.446128] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:00.924 [2024-11-06 07:59:23.462748] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:23:01.859  [2024-11-06T07:59:25.864Z] Copying: 26/256 [MB] (26 MBps) [2024-11-06T07:59:26.798Z] Copying: 49/256 [MB] (23 MBps) [2024-11-06T07:59:27.734Z] Copying: 72/256 [MB] (23 MBps) [2024-11-06T07:59:28.670Z] Copying: 95/256 [MB] (23 MBps) [2024-11-06T07:59:29.605Z] Copying: 118/256 [MB] (22 MBps) [2024-11-06T07:59:30.554Z] Copying: 140/256 [MB] (21 MBps) [2024-11-06T07:59:31.489Z] Copying: 162/256 [MB] (22 MBps) [2024-11-06T07:59:32.865Z] Copying: 185/256 [MB] (22 MBps) [2024-11-06T07:59:33.824Z] Copying: 207/256 [MB] (22 MBps) [2024-11-06T07:59:34.759Z] Copying: 230/256 [MB] (22 MBps) [2024-11-06T07:59:34.759Z] Copying: 253/256 [MB] (22 MBps) [2024-11-06T07:59:34.759Z] Copying: 256/256 [MB] (average 22 MBps)
[2024-11-06 07:59:34.599531] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
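
The read-back completes: 256 MB copied at a reported average of 22 MBps. That is consistent with the wall clock in the timestamps above, roughly 07:59:23.4 (spdk_dd's FTL startup finished) to 07:59:34.8, about 11.4 seconds. A quick cross-check of the average, as a sketch:

  # 256 MB over ~11.4 s of wall time, per the timestamps above.
  awk 'BEGIN { printf "%.1f MBps\n", 256 / 11.4 }'   # ~22.5, matching the log
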
00:23:12.130 [2024-11-06 07:59:34.612292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.130 [2024-11-06 07:59:34.612357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:12.130 [2024-11-06 07:59:34.612378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:12.130 [2024-11-06 07:59:34.612391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.130 [2024-11-06 07:59:34.612451] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:12.130 [2024-11-06 07:59:34.616116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.130 [2024-11-06 07:59:34.616153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:12.130 [2024-11-06 07:59:34.616170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.640 ms 00:23:12.130 [2024-11-06 07:59:34.616182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.130 [2024-11-06 07:59:34.616511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.130 [2024-11-06 07:59:34.616533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:12.130 [2024-11-06 07:59:34.616547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:23:12.130 [2024-11-06 07:59:34.616559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.130 [2024-11-06 07:59:34.620294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.130 [2024-11-06 07:59:34.620514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:12.130 [2024-11-06 07:59:34.620562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.712 ms 00:23:12.130 [2024-11-06 07:59:34.620577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.130 [2024-11-06 07:59:34.627959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.130 [2024-11-06 07:59:34.628143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:12.130 [2024-11-06 07:59:34.628172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.343 ms 00:23:12.130 [2024-11-06 07:59:34.628185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.130 [2024-11-06 07:59:34.660778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.130 [2024-11-06 07:59:34.660853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:12.130 [2024-11-06 07:59:34.660873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.451 ms 00:23:12.130 [2024-11-06 07:59:34.660885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.130 [2024-11-06 07:59:34.679874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.130 [2024-11-06 07:59:34.679966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:12.130 [2024-11-06 07:59:34.680009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.884 ms 00:23:12.130 [2024-11-06 07:59:34.680030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.130 [2024-11-06 07:59:34.680217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.130 [2024-11-06 07:59:34.680236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:12.130 [2024-11-06 07:59:34.680273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:23:12.130 [2024-11-06 07:59:34.680305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.130 [2024-11-06 07:59:34.713276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.130 [2024-11-06 07:59:34.713350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:12.130 [2024-11-06 07:59:34.713371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.915 ms 00:23:12.130 [2024-11-06 07:59:34.713383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.130 [2024-11-06 07:59:34.745681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.130 [2024-11-06 07:59:34.745748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:12.130 [2024-11-06 07:59:34.745769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.162 ms 00:23:12.130 [2024-11-06 07:59:34.745781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.390 [2024-11-06 07:59:34.777392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.390 [2024-11-06 07:59:34.777468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:12.390 [2024-11-06 07:59:34.777488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.490 ms 00:23:12.390 [2024-11-06 07:59:34.777500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.390 [2024-11-06 07:59:34.808972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.390 [2024-11-06 07:59:34.809368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:12.390 [2024-11-06
07:59:34.809410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.294 ms 00:23:12.390 [2024-11-06 07:59:34.809424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.390 [2024-11-06 07:59:34.809543] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:12.390 [2024-11-06 07:59:34.809598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.809614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.809627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.809641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.809654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.809666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.809679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.809691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.809704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.809717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.809729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.809742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.809755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.809767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.809780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.809792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.809805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.809818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.809830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.809843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.809855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.809868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.809880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.809893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.809909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.809936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.809949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.809963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.809975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.809988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.810001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.810014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.810027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.810039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.810052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.810064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.810076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.810088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.810101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.810113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.810125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.810137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.810150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.810162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.810174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.810187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.810199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.810211] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.810223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.810235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.810248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.810260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.810287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.810300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.810313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.810325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.810338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.810350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.810362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.810375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.810388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.810400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.810412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.810425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.810438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.810450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.810463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.810476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.810488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.810500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.810512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.810524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.810537] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.810549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.810562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:12.390 [2024-11-06 07:59:34.810574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:12.391 [2024-11-06 07:59:34.810586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:12.391 [2024-11-06 07:59:34.810598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:12.391 [2024-11-06 07:59:34.810610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:12.391 [2024-11-06 07:59:34.810641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:12.391 [2024-11-06 07:59:34.810653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:12.391 [2024-11-06 07:59:34.810666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:12.391 [2024-11-06 07:59:34.810679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:12.391 [2024-11-06 07:59:34.810691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:12.391 [2024-11-06 07:59:34.810703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:12.391 [2024-11-06 07:59:34.810716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:12.391 [2024-11-06 07:59:34.810729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:12.391 [2024-11-06 07:59:34.810743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:12.391 [2024-11-06 07:59:34.810756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:12.391 [2024-11-06 07:59:34.810768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:12.391 [2024-11-06 07:59:34.810781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:12.391 [2024-11-06 07:59:34.810794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:12.391 [2024-11-06 07:59:34.810807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:12.391 [2024-11-06 07:59:34.810820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:12.391 [2024-11-06 07:59:34.810832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:12.391 [2024-11-06 07:59:34.810868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:12.391 [2024-11-06 07:59:34.810882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:12.391 [2024-11-06 
07:59:34.810895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:12.391 [2024-11-06 07:59:34.810907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:12.391 [2024-11-06 07:59:34.810920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:12.391 [2024-11-06 07:59:34.810943] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:12.391 [2024-11-06 07:59:34.810955] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a89fd864-aa96-428e-abd4-47c1715fad37 00:23:12.391 [2024-11-06 07:59:34.810969] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:12.391 [2024-11-06 07:59:34.810981] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:12.391 [2024-11-06 07:59:34.810992] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:12.391 [2024-11-06 07:59:34.811004] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:12.391 [2024-11-06 07:59:34.811015] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:12.391 [2024-11-06 07:59:34.811043] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:12.391 [2024-11-06 07:59:34.811054] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:12.391 [2024-11-06 07:59:34.811064] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:12.391 [2024-11-06 07:59:34.811074] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:12.391 [2024-11-06 07:59:34.811086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.391 [2024-11-06 07:59:34.811098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:12.391 [2024-11-06 07:59:34.811124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.546 ms 00:23:12.391 [2024-11-06 07:59:34.811136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.391 [2024-11-06 07:59:34.829119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.391 [2024-11-06 07:59:34.829425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:12.391 [2024-11-06 07:59:34.829548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.951 ms 00:23:12.391 [2024-11-06 07:59:34.829599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.391 [2024-11-06 07:59:34.830275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.391 [2024-11-06 07:59:34.830442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:12.391 [2024-11-06 07:59:34.830573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.462 ms 00:23:12.391 [2024-11-06 07:59:34.830625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.391 [2024-11-06 07:59:34.878502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.391 [2024-11-06 07:59:34.878868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:12.391 [2024-11-06 07:59:34.878998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.391 [2024-11-06 07:59:34.879049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.391 [2024-11-06 07:59:34.879316] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:23:12.391 [2024-11-06 07:59:34.879459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:12.391 [2024-11-06 07:59:34.879567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.391 [2024-11-06 07:59:34.879680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.391 [2024-11-06 07:59:34.879800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.391 [2024-11-06 07:59:34.879918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:12.391 [2024-11-06 07:59:34.880039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.391 [2024-11-06 07:59:34.880093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.391 [2024-11-06 07:59:34.880324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.391 [2024-11-06 07:59:34.880381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:12.391 [2024-11-06 07:59:34.880550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.391 [2024-11-06 07:59:34.880605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.391 [2024-11-06 07:59:34.992445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.391 [2024-11-06 07:59:34.992769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:12.391 [2024-11-06 07:59:34.992893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.391 [2024-11-06 07:59:34.992951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.650 [2024-11-06 07:59:35.083574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.650 [2024-11-06 07:59:35.083905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:12.650 [2024-11-06 07:59:35.084054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.650 [2024-11-06 07:59:35.084107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.650 [2024-11-06 07:59:35.084321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.650 [2024-11-06 07:59:35.084439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:12.650 [2024-11-06 07:59:35.084569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.650 [2024-11-06 07:59:35.084622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.650 [2024-11-06 07:59:35.084777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.650 [2024-11-06 07:59:35.084907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:12.650 [2024-11-06 07:59:35.084932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.650 [2024-11-06 07:59:35.084965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.650 [2024-11-06 07:59:35.085143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.650 [2024-11-06 07:59:35.085167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:12.650 [2024-11-06 07:59:35.085182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.650 [2024-11-06 07:59:35.085193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:23:12.650 [2024-11-06 07:59:35.085274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.650 [2024-11-06 07:59:35.085296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:12.650 [2024-11-06 07:59:35.085310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.650 [2024-11-06 07:59:35.085323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.650 [2024-11-06 07:59:35.085389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.650 [2024-11-06 07:59:35.085405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:12.650 [2024-11-06 07:59:35.085418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.650 [2024-11-06 07:59:35.085430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.650 [2024-11-06 07:59:35.085496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:12.650 [2024-11-06 07:59:35.085514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:12.650 [2024-11-06 07:59:35.085527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:12.650 [2024-11-06 07:59:35.085551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.650 [2024-11-06 07:59:35.085750] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 473.499 ms, result 0 00:23:13.585 00:23:13.585 00:23:13.585 07:59:36 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:23:13.585 07:59:36 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:23:14.151 07:59:36 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:14.410 [2024-11-06 07:59:36.791828] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
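The three traced test steps just above (ftl/trim.sh lines 86, 87 and 90, per the xtrace prefixes) do the actual verification for this phase: they evidently check that the trimmed range reads back as zeroes, checksum the data file, then push the random pattern back through spdk_dd. A minimal re-run sketch of that sequence, assuming only the repository paths already shown in this log:

    # Verify the first 4 MiB (4194304 bytes) of the dumped data read back as zeroes;
    # cmp exits non-zero on the first differing byte, which would fail the test step.
    cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero

    # Checksum the same file (presumably recorded for a later comparison in the script).
    md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data

    # Rewrite 1024 blocks of the random pattern onto the ftl0 bdev; at a 4 KiB
    # block size that is the "Copying: 4096/4096 [kB]" transfer reported below.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern \
        --ob=ftl0 --count=1024 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json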
00:23:14.410 [2024-11-06 07:59:36.792004] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76436 ] 00:23:14.410 [2024-11-06 07:59:36.975651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.669 [2024-11-06 07:59:37.106218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:14.929 [2024-11-06 07:59:37.468095] bdev.c:8607:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:14.929 [2024-11-06 07:59:37.468177] bdev.c:8607:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:15.189 [2024-11-06 07:59:37.636816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.189 [2024-11-06 07:59:37.636905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:15.189 [2024-11-06 07:59:37.636944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:15.189 [2024-11-06 07:59:37.636958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.189 [2024-11-06 07:59:37.640741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.189 [2024-11-06 07:59:37.640788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:15.189 [2024-11-06 07:59:37.640821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.741 ms 00:23:15.189 [2024-11-06 07:59:37.640833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.189 [2024-11-06 07:59:37.640970] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:15.189 [2024-11-06 07:59:37.642006] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:15.189 [2024-11-06 07:59:37.642057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.189 [2024-11-06 07:59:37.642083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:15.189 [2024-11-06 07:59:37.642105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.098 ms 00:23:15.189 [2024-11-06 07:59:37.642125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.189 [2024-11-06 07:59:37.645071] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:15.189 [2024-11-06 07:59:37.663178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.189 [2024-11-06 07:59:37.663243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:15.189 [2024-11-06 07:59:37.663306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.107 ms 00:23:15.189 [2024-11-06 07:59:37.663319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.189 [2024-11-06 07:59:37.663474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.189 [2024-11-06 07:59:37.663496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:15.189 [2024-11-06 07:59:37.663512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:23:15.189 [2024-11-06 07:59:37.663524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.189 [2024-11-06 07:59:37.675954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:15.189 [2024-11-06 07:59:37.676041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:15.189 [2024-11-06 07:59:37.676076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.357 ms 00:23:15.189 [2024-11-06 07:59:37.676089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.189 [2024-11-06 07:59:37.676330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.189 [2024-11-06 07:59:37.676358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:15.189 [2024-11-06 07:59:37.676372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:23:15.189 [2024-11-06 07:59:37.676384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.189 [2024-11-06 07:59:37.676432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.189 [2024-11-06 07:59:37.676449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:15.189 [2024-11-06 07:59:37.676468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:23:15.189 [2024-11-06 07:59:37.676480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.189 [2024-11-06 07:59:37.676518] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:15.189 [2024-11-06 07:59:37.682447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.189 [2024-11-06 07:59:37.682489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:15.189 [2024-11-06 07:59:37.682521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.943 ms 00:23:15.189 [2024-11-06 07:59:37.682533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.189 [2024-11-06 07:59:37.682613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.189 [2024-11-06 07:59:37.682644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:15.189 [2024-11-06 07:59:37.682658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:15.189 [2024-11-06 07:59:37.682670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.189 [2024-11-06 07:59:37.682705] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:15.189 [2024-11-06 07:59:37.682739] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:15.189 [2024-11-06 07:59:37.682791] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:15.189 [2024-11-06 07:59:37.682814] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:15.189 [2024-11-06 07:59:37.682932] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:15.189 [2024-11-06 07:59:37.682963] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:15.189 [2024-11-06 07:59:37.682979] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:15.189 [2024-11-06 07:59:37.682994] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:15.189 [2024-11-06 07:59:37.683008] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:15.189 [2024-11-06 07:59:37.683026] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:15.189 [2024-11-06 07:59:37.683038] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:15.189 [2024-11-06 07:59:37.683050] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:15.189 [2024-11-06 07:59:37.683062] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:15.189 [2024-11-06 07:59:37.683074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.189 [2024-11-06 07:59:37.683086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:15.189 [2024-11-06 07:59:37.683098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.374 ms 00:23:15.189 [2024-11-06 07:59:37.683109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.189 [2024-11-06 07:59:37.683208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.189 [2024-11-06 07:59:37.683224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:15.189 [2024-11-06 07:59:37.683236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:23:15.189 [2024-11-06 07:59:37.683252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.189 [2024-11-06 07:59:37.683391] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:15.189 [2024-11-06 07:59:37.683413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:15.190 [2024-11-06 07:59:37.683426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:15.190 [2024-11-06 07:59:37.683438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:15.190 [2024-11-06 07:59:37.683452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:15.190 [2024-11-06 07:59:37.683463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:15.190 [2024-11-06 07:59:37.683474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:15.190 [2024-11-06 07:59:37.683484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:15.190 [2024-11-06 07:59:37.683494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:15.190 [2024-11-06 07:59:37.683504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:15.190 [2024-11-06 07:59:37.683515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:15.190 [2024-11-06 07:59:37.683525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:15.190 [2024-11-06 07:59:37.683535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:15.190 [2024-11-06 07:59:37.683563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:15.190 [2024-11-06 07:59:37.683575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:15.190 [2024-11-06 07:59:37.683588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:15.190 [2024-11-06 07:59:37.683599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:15.190 [2024-11-06 07:59:37.683610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:15.190 [2024-11-06 07:59:37.683638] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:15.190 [2024-11-06 07:59:37.683649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:15.190 [2024-11-06 07:59:37.683660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:15.190 [2024-11-06 07:59:37.683671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:15.190 [2024-11-06 07:59:37.683682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:15.190 [2024-11-06 07:59:37.683692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:15.190 [2024-11-06 07:59:37.683702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:15.190 [2024-11-06 07:59:37.683713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:15.190 [2024-11-06 07:59:37.683723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:15.190 [2024-11-06 07:59:37.683734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:15.190 [2024-11-06 07:59:37.683744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:15.190 [2024-11-06 07:59:37.683755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:15.190 [2024-11-06 07:59:37.683765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:15.190 [2024-11-06 07:59:37.683776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:15.190 [2024-11-06 07:59:37.683786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:15.190 [2024-11-06 07:59:37.683797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:15.190 [2024-11-06 07:59:37.683808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:15.190 [2024-11-06 07:59:37.683818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:15.190 [2024-11-06 07:59:37.683829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:15.190 [2024-11-06 07:59:37.683840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:15.190 [2024-11-06 07:59:37.683851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:15.190 [2024-11-06 07:59:37.683861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:15.190 [2024-11-06 07:59:37.683872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:15.190 [2024-11-06 07:59:37.683885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:15.190 [2024-11-06 07:59:37.683896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:15.190 [2024-11-06 07:59:37.683906] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:15.190 [2024-11-06 07:59:37.683919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:15.190 [2024-11-06 07:59:37.683930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:15.190 [2024-11-06 07:59:37.683943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:15.190 [2024-11-06 07:59:37.683963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:15.190 [2024-11-06 07:59:37.683975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:15.190 [2024-11-06 07:59:37.683986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:15.190 
[2024-11-06 07:59:37.683997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:15.190 [2024-11-06 07:59:37.684008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:15.190 [2024-11-06 07:59:37.684019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:15.190 [2024-11-06 07:59:37.684033] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:15.190 [2024-11-06 07:59:37.684063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:15.190 [2024-11-06 07:59:37.684076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:15.190 [2024-11-06 07:59:37.684088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:15.190 [2024-11-06 07:59:37.684099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:15.190 [2024-11-06 07:59:37.684111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:15.190 [2024-11-06 07:59:37.684122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:15.190 [2024-11-06 07:59:37.684133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:15.190 [2024-11-06 07:59:37.684144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:15.190 [2024-11-06 07:59:37.684155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:15.190 [2024-11-06 07:59:37.684166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:15.190 [2024-11-06 07:59:37.684177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:15.190 [2024-11-06 07:59:37.684188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:15.190 [2024-11-06 07:59:37.684199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:15.190 [2024-11-06 07:59:37.684209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:15.190 [2024-11-06 07:59:37.684221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:15.190 [2024-11-06 07:59:37.684233] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:15.190 [2024-11-06 07:59:37.684246] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:15.190 [2024-11-06 07:59:37.684258] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:15.190 [2024-11-06 07:59:37.684270] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:15.190 [2024-11-06 07:59:37.684296] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:15.190 [2024-11-06 07:59:37.684310] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:15.190 [2024-11-06 07:59:37.684323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.190 [2024-11-06 07:59:37.684336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:15.190 [2024-11-06 07:59:37.684348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.999 ms 00:23:15.190 [2024-11-06 07:59:37.684365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.190 [2024-11-06 07:59:37.731943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.190 [2024-11-06 07:59:37.732043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:15.190 [2024-11-06 07:59:37.732083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.489 ms 00:23:15.190 [2024-11-06 07:59:37.732096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.190 [2024-11-06 07:59:37.732388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.190 [2024-11-06 07:59:37.732416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:15.190 [2024-11-06 07:59:37.732432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:23:15.190 [2024-11-06 07:59:37.732444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.190 [2024-11-06 07:59:37.794494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.190 [2024-11-06 07:59:37.794582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:15.190 [2024-11-06 07:59:37.794638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.011 ms 00:23:15.190 [2024-11-06 07:59:37.794658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.190 [2024-11-06 07:59:37.794854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.190 [2024-11-06 07:59:37.794877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:15.190 [2024-11-06 07:59:37.794896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:15.190 [2024-11-06 07:59:37.794909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.190 [2024-11-06 07:59:37.795741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.190 [2024-11-06 07:59:37.795763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:15.190 [2024-11-06 07:59:37.795778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.798 ms 00:23:15.190 [2024-11-06 07:59:37.795790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.190 [2024-11-06 07:59:37.795999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.190 [2024-11-06 07:59:37.796034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:15.190 [2024-11-06 07:59:37.796048] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:23:15.191 [2024-11-06 07:59:37.796059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.450 [2024-11-06 07:59:37.819591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.450 [2024-11-06 07:59:37.819690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:15.450 [2024-11-06 07:59:37.819730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.495 ms 00:23:15.450 [2024-11-06 07:59:37.819743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.450 [2024-11-06 07:59:37.837814] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:23:15.450 [2024-11-06 07:59:37.838154] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:15.450 [2024-11-06 07:59:37.838187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.450 [2024-11-06 07:59:37.838202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:15.450 [2024-11-06 07:59:37.838220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.192 ms 00:23:15.450 [2024-11-06 07:59:37.838232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.450 [2024-11-06 07:59:37.869510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.450 [2024-11-06 07:59:37.869651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:15.450 [2024-11-06 07:59:37.869692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.064 ms 00:23:15.450 [2024-11-06 07:59:37.869707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.450 [2024-11-06 07:59:37.887758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.450 [2024-11-06 07:59:37.887843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:15.450 [2024-11-06 07:59:37.887882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.837 ms 00:23:15.450 [2024-11-06 07:59:37.887896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.450 [2024-11-06 07:59:37.904351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.450 [2024-11-06 07:59:37.904422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:15.450 [2024-11-06 07:59:37.904442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.270 ms 00:23:15.450 [2024-11-06 07:59:37.904455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.450 [2024-11-06 07:59:37.905604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.450 [2024-11-06 07:59:37.905764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:15.450 [2024-11-06 07:59:37.905793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.947 ms 00:23:15.450 [2024-11-06 07:59:37.905807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.450 [2024-11-06 07:59:37.993961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.450 [2024-11-06 07:59:37.994413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:15.450 [2024-11-06 07:59:37.994541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 88.103 ms 00:23:15.450 [2024-11-06 07:59:37.994596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.450 [2024-11-06 07:59:38.009891] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:15.450 [2024-11-06 07:59:38.038696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.450 [2024-11-06 07:59:38.039104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:15.450 [2024-11-06 07:59:38.039242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.747 ms 00:23:15.450 [2024-11-06 07:59:38.039315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.450 [2024-11-06 07:59:38.039626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.450 [2024-11-06 07:59:38.039771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:15.450 [2024-11-06 07:59:38.039894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:23:15.450 [2024-11-06 07:59:38.040006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.450 [2024-11-06 07:59:38.040146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.450 [2024-11-06 07:59:38.040209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:15.450 [2024-11-06 07:59:38.040339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:23:15.450 [2024-11-06 07:59:38.040392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.450 [2024-11-06 07:59:38.040617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.450 [2024-11-06 07:59:38.040752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:15.450 [2024-11-06 07:59:38.040876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:23:15.450 [2024-11-06 07:59:38.040987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.450 [2024-11-06 07:59:38.041123] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:15.450 [2024-11-06 07:59:38.041272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.450 [2024-11-06 07:59:38.041329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:15.450 [2024-11-06 07:59:38.041439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:23:15.450 [2024-11-06 07:59:38.041489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.450 [2024-11-06 07:59:38.074859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.450 [2024-11-06 07:59:38.075180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:15.450 [2024-11-06 07:59:38.075331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.294 ms 00:23:15.450 [2024-11-06 07:59:38.075385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.450 [2024-11-06 07:59:38.075698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.450 [2024-11-06 07:59:38.075838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:15.450 [2024-11-06 07:59:38.075963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:23:15.450 [2024-11-06 07:59:38.076108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
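The layout dumped during this startup is easy to sanity-check: "L2P entries: 23592960" at "L2P address size: 4" means the mapping table needs 23592960 * 4 = 94371840 bytes, which is exactly the 90.00 MiB reported for the l2p region. A quick check in shell arithmetic:

    # 23592960 four-byte L2P entries, in bytes and then in MiB
    echo $(( 23592960 * 4 ))            # 94371840 bytes
    echo $(( 23592960 * 4 / 1048576 ))  # 90 -> matches "Region l2p ... blocks: 90.00 MiB"

The other figures hang together the same way: the NV cache reports "chunk count 5" while the restore step found "full chunks = 1, empty chunks = 3", presumably with the remaining chunk left open for writes; and the "WAF: inf" in the statistics dumps follows directly from "user writes: 0", since write amplification here is total writes over user writes and 960/0 is printed as infinity.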
00:23:15.450 [2024-11-06 07:59:38.077776] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:15.709 [2024-11-06 07:59:38.082562] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 440.454 ms, result 0 00:23:15.709 [2024-11-06 07:59:38.083638] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:15.709 [2024-11-06 07:59:38.100534] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:15.709  [2024-11-06T07:59:38.338Z] Copying: 4096/4096 [kB] (average 22 MBps)[2024-11-06 07:59:38.286506] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:15.709 [2024-11-06 07:59:38.299882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.709 [2024-11-06 07:59:38.299950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:15.709 [2024-11-06 07:59:38.299972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:15.709 [2024-11-06 07:59:38.299986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.709 [2024-11-06 07:59:38.300039] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:15.709 [2024-11-06 07:59:38.304083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.709 [2024-11-06 07:59:38.304126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:15.709 [2024-11-06 07:59:38.304143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.019 ms 00:23:15.709 [2024-11-06 07:59:38.304155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.709 [2024-11-06 07:59:38.306270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.709 [2024-11-06 07:59:38.306314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:15.709 [2024-11-06 07:59:38.306332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.083 ms 00:23:15.709 [2024-11-06 07:59:38.306345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.709 [2024-11-06 07:59:38.310234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.709 [2024-11-06 07:59:38.310288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:15.709 [2024-11-06 07:59:38.310320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.863 ms 00:23:15.709 [2024-11-06 07:59:38.310333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.709 [2024-11-06 07:59:38.317819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.709 [2024-11-06 07:59:38.318001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:15.709 [2024-11-06 07:59:38.318030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.441 ms 00:23:15.709 [2024-11-06 07:59:38.318043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.970 [2024-11-06 07:59:38.351013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.970 [2024-11-06 07:59:38.351101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:15.970 [2024-11-06 07:59:38.351124] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 32.873 ms 00:23:15.970 [2024-11-06 07:59:38.351137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.970 [2024-11-06 07:59:38.370337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.970 [2024-11-06 07:59:38.370421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:15.970 [2024-11-06 07:59:38.370472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.098 ms 00:23:15.970 [2024-11-06 07:59:38.370496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.970 [2024-11-06 07:59:38.370730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.970 [2024-11-06 07:59:38.370752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:15.970 [2024-11-06 07:59:38.370767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:23:15.970 [2024-11-06 07:59:38.370780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.970 [2024-11-06 07:59:38.402976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.970 [2024-11-06 07:59:38.403066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:15.970 [2024-11-06 07:59:38.403089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.138 ms 00:23:15.970 [2024-11-06 07:59:38.403102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.970 [2024-11-06 07:59:38.434798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.970 [2024-11-06 07:59:38.434888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:15.970 [2024-11-06 07:59:38.434911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.506 ms 00:23:15.970 [2024-11-06 07:59:38.434923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.970 [2024-11-06 07:59:38.466027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.970 [2024-11-06 07:59:38.466114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:15.970 [2024-11-06 07:59:38.466138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.989 ms 00:23:15.970 [2024-11-06 07:59:38.466151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.970 [2024-11-06 07:59:38.497271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.970 [2024-11-06 07:59:38.497358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:15.970 [2024-11-06 07:59:38.497380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.938 ms 00:23:15.970 [2024-11-06 07:59:38.497393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.970 [2024-11-06 07:59:38.497503] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:15.970 [2024-11-06 07:59:38.497550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:15.970 [2024-11-06 07:59:38.497567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:15.970 [2024-11-06 07:59:38.497580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:15.970 [2024-11-06 07:59:38.497594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:23:15.970 [2024-11-06 07:59:38.497607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:15.970 [2024-11-06 07:59:38.497620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:15.970 [2024-11-06 07:59:38.497632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:15.970 [2024-11-06 07:59:38.497645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:15.970 [2024-11-06 07:59:38.497658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:15.970 [2024-11-06 07:59:38.497672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:15.970 [2024-11-06 07:59:38.497684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:15.970 [2024-11-06 07:59:38.497698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:15.970 [2024-11-06 07:59:38.497710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:15.970 [2024-11-06 07:59:38.497723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:15.970 [2024-11-06 07:59:38.497736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:15.970 [2024-11-06 07:59:38.497749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:15.970 [2024-11-06 07:59:38.497761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:15.970 [2024-11-06 07:59:38.497774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:15.970 [2024-11-06 07:59:38.497787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:15.970 [2024-11-06 07:59:38.497801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:15.970 [2024-11-06 07:59:38.497813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:15.970 [2024-11-06 07:59:38.497826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:15.970 [2024-11-06 07:59:38.497839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:15.970 [2024-11-06 07:59:38.497852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:15.970 [2024-11-06 07:59:38.497863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:15.970 [2024-11-06 07:59:38.497876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:15.970 [2024-11-06 07:59:38.497890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:15.970 [2024-11-06 07:59:38.497903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:15.970 [2024-11-06 07:59:38.497916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:23:15.970 [2024-11-06 07:59:38.497929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:15.970 [2024-11-06 07:59:38.497941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:15.970 [2024-11-06 07:59:38.497954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:15.970 [2024-11-06 07:59:38.497968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:15.970 [2024-11-06 07:59:38.497981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:15.970 [2024-11-06 07:59:38.497994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:15.970 [2024-11-06 07:59:38.498008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:15.970 [2024-11-06 07:59:38.498020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:15.970 [2024-11-06 07:59:38.498033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:15.970 [2024-11-06 07:59:38.498046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:15.970 [2024-11-06 07:59:38.498058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:15.970 [2024-11-06 07:59:38.498071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:15.970 [2024-11-06 07:59:38.498084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:15.970 [2024-11-06 07:59:38.498097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:15.970 [2024-11-06 07:59:38.498109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:15.970 [2024-11-06 07:59:38.498122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:15.970 [2024-11-06 07:59:38.498135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:15.970 [2024-11-06 07:59:38.498148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498597] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:15.971 [2024-11-06 07:59:38.498908] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:15.971 [2024-11-06 07:59:38.498920] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a89fd864-aa96-428e-abd4-47c1715fad37 00:23:15.971 [2024-11-06 07:59:38.498933] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:15.971 [2024-11-06 07:59:38.498945] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:23:15.971 [2024-11-06 07:59:38.498957] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:15.971 [2024-11-06 07:59:38.498970] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:15.971 [2024-11-06 07:59:38.498981] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:15.971 [2024-11-06 07:59:38.498994] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:15.971 [2024-11-06 07:59:38.499006] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:15.971 [2024-11-06 07:59:38.499017] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:15.971 [2024-11-06 07:59:38.499028] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:15.971 [2024-11-06 07:59:38.499040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.971 [2024-11-06 07:59:38.499052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:15.971 [2024-11-06 07:59:38.499078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.540 ms 00:23:15.971 [2024-11-06 07:59:38.499091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.971 [2024-11-06 07:59:38.517167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.971 [2024-11-06 07:59:38.517245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:15.971 [2024-11-06 07:59:38.517286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.041 ms 00:23:15.971 [2024-11-06 07:59:38.517300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.971 [2024-11-06 07:59:38.517915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.971 [2024-11-06 07:59:38.517972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:15.971 [2024-11-06 07:59:38.517989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.491 ms 00:23:15.971 [2024-11-06 07:59:38.518001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.971 [2024-11-06 07:59:38.569089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.971 [2024-11-06 07:59:38.569187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:15.971 [2024-11-06 07:59:38.569209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.971 [2024-11-06 07:59:38.569223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.971 [2024-11-06 07:59:38.569428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.971 [2024-11-06 07:59:38.569466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:15.971 [2024-11-06 07:59:38.569494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.971 [2024-11-06 07:59:38.569506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.971 [2024-11-06 07:59:38.569594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.971 [2024-11-06 07:59:38.569615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:15.971 [2024-11-06 07:59:38.569644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.971 [2024-11-06 07:59:38.569656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.971 [2024-11-06 07:59:38.569685] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:15.971 [2024-11-06 07:59:38.569699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:15.971 [2024-11-06 07:59:38.569726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:15.971 [2024-11-06 07:59:38.569739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.230 [2024-11-06 07:59:38.692134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.230 [2024-11-06 07:59:38.692231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:16.230 [2024-11-06 07:59:38.692284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.230 [2024-11-06 07:59:38.692300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.230 [2024-11-06 07:59:38.784902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.230 [2024-11-06 07:59:38.785015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:16.230 [2024-11-06 07:59:38.785047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.230 [2024-11-06 07:59:38.785062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.230 [2024-11-06 07:59:38.785177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.230 [2024-11-06 07:59:38.785196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:16.230 [2024-11-06 07:59:38.785210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.230 [2024-11-06 07:59:38.785222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.230 [2024-11-06 07:59:38.785297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.230 [2024-11-06 07:59:38.785315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:16.230 [2024-11-06 07:59:38.785343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.230 [2024-11-06 07:59:38.785362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.230 [2024-11-06 07:59:38.785509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.230 [2024-11-06 07:59:38.785530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:16.230 [2024-11-06 07:59:38.785544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.230 [2024-11-06 07:59:38.785556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.230 [2024-11-06 07:59:38.785621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.230 [2024-11-06 07:59:38.785640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:16.230 [2024-11-06 07:59:38.785654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.230 [2024-11-06 07:59:38.785666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.230 [2024-11-06 07:59:38.785733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.230 [2024-11-06 07:59:38.785749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:16.230 [2024-11-06 07:59:38.785761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.230 [2024-11-06 07:59:38.785773] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:23:16.230 [2024-11-06 07:59:38.785840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.230 [2024-11-06 07:59:38.785859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:16.230 [2024-11-06 07:59:38.785872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.230 [2024-11-06 07:59:38.785889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.230 [2024-11-06 07:59:38.786098] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 486.262 ms, result 0 00:23:17.645 00:23:17.645 00:23:17.645 07:59:39 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=76478 00:23:17.645 07:59:39 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:23:17.645 07:59:39 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 76478 00:23:17.645 07:59:39 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 76478 ']' 00:23:17.645 07:59:39 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.645 07:59:39 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:17.645 07:59:39 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:17.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:17.645 07:59:39 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:17.645 07:59:39 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:23:17.645 [2024-11-06 07:59:40.038812] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
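(The statistics dumped during the shutdown above report "WAF: inf", which is consistent with the 960 total writes against 0 user writes also shown there, WAF being the ratio of physical to user writes. At this point trim.sh has relaunched spdk_tgt as pid 76478 and blocks in waitforlisten until the target's UNIX-domain RPC socket at /var/tmp/spdk.sock answers; only then can the bdev_ftl_unmap RPCs seen later in this run be issued. A minimal sketch of that wait pattern, using a hypothetical helper name wait_for_rpc_sock rather than the harness's actual waitforlisten implementation:

    # wait_for_rpc_sock: hypothetical stand-in for the harness's waitforlisten.
    # Polls the spdk_tgt RPC socket with a trivial RPC until it answers, the
    # process dies, or the retry budget runs out.
    wait_for_rpc_sock() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=100
        while (( retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target exited early
            # rpc_get_methods succeeds once the socket is up and serving RPCs
            if scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }
    # usage mirroring this run: wait_for_rpc_sock 76478 /var/tmp/spdk.sock

This is a sketch of the polling idiom only; the real waitforlisten in common/autotest_common.sh has additional bookkeeping not reproduced here.)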
00:23:17.646 [2024-11-06 07:59:40.038999] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76478 ] 00:23:17.646 [2024-11-06 07:59:40.225187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.904 [2024-11-06 07:59:40.376348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:18.840 07:59:41 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:18.840 07:59:41 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:23:18.840 07:59:41 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:23:19.099 [2024-11-06 07:59:41.712787] bdev.c:8607:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:19.099 [2024-11-06 07:59:41.712897] bdev.c:8607:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:19.358 [2024-11-06 07:59:41.904818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.358 [2024-11-06 07:59:41.904901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:19.358 [2024-11-06 07:59:41.904930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:19.358 [2024-11-06 07:59:41.904944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.358 [2024-11-06 07:59:41.909226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.358 [2024-11-06 07:59:41.909442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:19.358 [2024-11-06 07:59:41.909480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.249 ms 00:23:19.358 [2024-11-06 07:59:41.909497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.358 [2024-11-06 07:59:41.909681] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:19.358 [2024-11-06 07:59:41.910800] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:19.358 [2024-11-06 07:59:41.910849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.358 [2024-11-06 07:59:41.910865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:19.358 [2024-11-06 07:59:41.910882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.191 ms 00:23:19.358 [2024-11-06 07:59:41.910895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.358 [2024-11-06 07:59:41.913460] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:19.358 [2024-11-06 07:59:41.932001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.358 [2024-11-06 07:59:41.932368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:19.358 [2024-11-06 07:59:41.932405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.544 ms 00:23:19.358 [2024-11-06 07:59:41.932425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.358 [2024-11-06 07:59:41.932637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.358 [2024-11-06 07:59:41.932671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:19.358 [2024-11-06 07:59:41.932687] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:23:19.358 [2024-11-06 07:59:41.932703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.358 [2024-11-06 07:59:41.945640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.358 [2024-11-06 07:59:41.945750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:19.358 [2024-11-06 07:59:41.945774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.843 ms 00:23:19.358 [2024-11-06 07:59:41.945792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.358 [2024-11-06 07:59:41.946068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.358 [2024-11-06 07:59:41.946095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:19.359 [2024-11-06 07:59:41.946111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:23:19.359 [2024-11-06 07:59:41.946127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.359 [2024-11-06 07:59:41.946176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.359 [2024-11-06 07:59:41.946219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:19.359 [2024-11-06 07:59:41.946237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:23:19.359 [2024-11-06 07:59:41.946283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.359 [2024-11-06 07:59:41.946345] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:19.359 [2024-11-06 07:59:41.952437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.359 [2024-11-06 07:59:41.952482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:19.359 [2024-11-06 07:59:41.952525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.109 ms 00:23:19.359 [2024-11-06 07:59:41.952540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.359 [2024-11-06 07:59:41.952652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.359 [2024-11-06 07:59:41.952674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:19.359 [2024-11-06 07:59:41.952696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:23:19.359 [2024-11-06 07:59:41.952710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.359 [2024-11-06 07:59:41.952763] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:19.359 [2024-11-06 07:59:41.952802] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:19.359 [2024-11-06 07:59:41.952872] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:19.359 [2024-11-06 07:59:41.952901] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:19.359 [2024-11-06 07:59:41.953043] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:19.359 [2024-11-06 07:59:41.953065] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:19.359 [2024-11-06 07:59:41.953085] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:19.359 [2024-11-06 07:59:41.953102] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:19.359 [2024-11-06 07:59:41.953127] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:19.359 [2024-11-06 07:59:41.953143] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:19.359 [2024-11-06 07:59:41.953159] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:19.359 [2024-11-06 07:59:41.953172] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:19.359 [2024-11-06 07:59:41.953191] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:19.359 [2024-11-06 07:59:41.953207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.359 [2024-11-06 07:59:41.953222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:19.359 [2024-11-06 07:59:41.953236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.458 ms 00:23:19.359 [2024-11-06 07:59:41.953272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.359 [2024-11-06 07:59:41.953381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.359 [2024-11-06 07:59:41.953407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:19.359 [2024-11-06 07:59:41.953422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:23:19.359 [2024-11-06 07:59:41.953437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.359 [2024-11-06 07:59:41.953560] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:19.359 [2024-11-06 07:59:41.953584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:19.359 [2024-11-06 07:59:41.953600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:19.359 [2024-11-06 07:59:41.953617] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:19.359 [2024-11-06 07:59:41.953640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:19.359 [2024-11-06 07:59:41.953666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:19.359 [2024-11-06 07:59:41.953679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:19.359 [2024-11-06 07:59:41.953707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:19.359 [2024-11-06 07:59:41.953722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:19.359 [2024-11-06 07:59:41.953741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:19.359 [2024-11-06 07:59:41.953753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:19.359 [2024-11-06 07:59:41.953772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:19.359 [2024-11-06 07:59:41.953785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:19.359 [2024-11-06 07:59:41.953803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:19.359 [2024-11-06 07:59:41.953817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:19.359 [2024-11-06 07:59:41.953835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:19.359 
[2024-11-06 07:59:41.953848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:19.359 [2024-11-06 07:59:41.953865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:19.359 [2024-11-06 07:59:41.953879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:19.359 [2024-11-06 07:59:41.953897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:19.359 [2024-11-06 07:59:41.953927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:19.359 [2024-11-06 07:59:41.953947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:19.359 [2024-11-06 07:59:41.953960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:19.359 [2024-11-06 07:59:41.953984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:19.359 [2024-11-06 07:59:41.953998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:19.359 [2024-11-06 07:59:41.954016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:19.359 [2024-11-06 07:59:41.954030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:19.359 [2024-11-06 07:59:41.954048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:19.359 [2024-11-06 07:59:41.954061] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:19.359 [2024-11-06 07:59:41.954079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:19.359 [2024-11-06 07:59:41.954092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:19.359 [2024-11-06 07:59:41.954113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:19.359 [2024-11-06 07:59:41.954127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:19.359 [2024-11-06 07:59:41.954145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:19.359 [2024-11-06 07:59:41.954159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:19.359 [2024-11-06 07:59:41.954179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:19.359 [2024-11-06 07:59:41.954192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:19.359 [2024-11-06 07:59:41.954210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:19.359 [2024-11-06 07:59:41.954223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:19.359 [2024-11-06 07:59:41.954261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:19.359 [2024-11-06 07:59:41.954277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:19.359 [2024-11-06 07:59:41.954297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:19.359 [2024-11-06 07:59:41.954310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:19.359 [2024-11-06 07:59:41.954329] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:19.359 [2024-11-06 07:59:41.954344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:19.359 [2024-11-06 07:59:41.954364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:19.359 [2024-11-06 07:59:41.954386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:19.359 [2024-11-06 07:59:41.954405] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:23:19.359 [2024-11-06 07:59:41.954421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:19.359 [2024-11-06 07:59:41.954441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:19.359 [2024-11-06 07:59:41.954455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:19.359 [2024-11-06 07:59:41.954474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:19.359 [2024-11-06 07:59:41.954487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:19.359 [2024-11-06 07:59:41.954508] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:19.359 [2024-11-06 07:59:41.954526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:19.359 [2024-11-06 07:59:41.954556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:19.359 [2024-11-06 07:59:41.954571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:19.359 [2024-11-06 07:59:41.954589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:19.359 [2024-11-06 07:59:41.954603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:19.359 [2024-11-06 07:59:41.954621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:19.359 [2024-11-06 07:59:41.954635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:19.359 [2024-11-06 07:59:41.954650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:19.359 [2024-11-06 07:59:41.954663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:19.359 [2024-11-06 07:59:41.954678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:19.359 [2024-11-06 07:59:41.954691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:19.360 [2024-11-06 07:59:41.954706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:19.360 [2024-11-06 07:59:41.954719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:19.360 [2024-11-06 07:59:41.954735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:19.360 [2024-11-06 07:59:41.954747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:19.360 [2024-11-06 07:59:41.954764] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:19.360 [2024-11-06 
07:59:41.954780] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:19.360 [2024-11-06 07:59:41.954800] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:19.360 [2024-11-06 07:59:41.954815] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:19.360 [2024-11-06 07:59:41.954830] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:19.360 [2024-11-06 07:59:41.954843] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:19.360 [2024-11-06 07:59:41.954861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.360 [2024-11-06 07:59:41.954875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:19.360 [2024-11-06 07:59:41.954902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.367 ms 00:23:19.360 [2024-11-06 07:59:41.954915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.619 [2024-11-06 07:59:42.002479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.619 [2024-11-06 07:59:42.002569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:19.619 [2024-11-06 07:59:42.002614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.460 ms 00:23:19.619 [2024-11-06 07:59:42.002629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.619 [2024-11-06 07:59:42.002909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.619 [2024-11-06 07:59:42.002931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:19.619 [2024-11-06 07:59:42.002950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:23:19.619 [2024-11-06 07:59:42.002964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.619 [2024-11-06 07:59:42.054715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.619 [2024-11-06 07:59:42.054802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:19.619 [2024-11-06 07:59:42.054845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.703 ms 00:23:19.619 [2024-11-06 07:59:42.054862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.619 [2024-11-06 07:59:42.055040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.619 [2024-11-06 07:59:42.055061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:19.619 [2024-11-06 07:59:42.055084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:19.619 [2024-11-06 07:59:42.055098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.619 [2024-11-06 07:59:42.055922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.619 [2024-11-06 07:59:42.055961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:19.619 [2024-11-06 07:59:42.055985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.782 ms 00:23:19.619 [2024-11-06 07:59:42.056006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:19.619 [2024-11-06 07:59:42.056233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.619 [2024-11-06 07:59:42.056272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:19.619 [2024-11-06 07:59:42.056296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.187 ms 00:23:19.619 [2024-11-06 07:59:42.056310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.619 [2024-11-06 07:59:42.082618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.619 [2024-11-06 07:59:42.082723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:19.619 [2024-11-06 07:59:42.082756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.260 ms 00:23:19.619 [2024-11-06 07:59:42.082772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.619 [2024-11-06 07:59:42.101798] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:19.619 [2024-11-06 07:59:42.102200] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:19.619 [2024-11-06 07:59:42.102245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.619 [2024-11-06 07:59:42.102297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:19.619 [2024-11-06 07:59:42.102321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.235 ms 00:23:19.619 [2024-11-06 07:59:42.102335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.619 [2024-11-06 07:59:42.134075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.619 [2024-11-06 07:59:42.134195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:19.619 [2024-11-06 07:59:42.134229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.527 ms 00:23:19.619 [2024-11-06 07:59:42.134245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.620 [2024-11-06 07:59:42.152443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.620 [2024-11-06 07:59:42.152532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:19.620 [2024-11-06 07:59:42.152588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.950 ms 00:23:19.620 [2024-11-06 07:59:42.152602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.620 [2024-11-06 07:59:42.169543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.620 [2024-11-06 07:59:42.169865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:19.620 [2024-11-06 07:59:42.169912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.732 ms 00:23:19.620 [2024-11-06 07:59:42.169929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.620 [2024-11-06 07:59:42.171098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.620 [2024-11-06 07:59:42.171135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:19.620 [2024-11-06 07:59:42.171161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.949 ms 00:23:19.620 [2024-11-06 07:59:42.171176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.879 [2024-11-06 
07:59:42.278907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.879 [2024-11-06 07:59:42.279010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:19.879 [2024-11-06 07:59:42.279045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 107.680 ms 00:23:19.879 [2024-11-06 07:59:42.279061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.879 [2024-11-06 07:59:42.295841] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:19.879 [2024-11-06 07:59:42.324386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.879 [2024-11-06 07:59:42.324529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:19.879 [2024-11-06 07:59:42.324556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.067 ms 00:23:19.879 [2024-11-06 07:59:42.324580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.879 [2024-11-06 07:59:42.324832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.879 [2024-11-06 07:59:42.324862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:19.879 [2024-11-06 07:59:42.324880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:23:19.879 [2024-11-06 07:59:42.324903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.879 [2024-11-06 07:59:42.325001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.879 [2024-11-06 07:59:42.325037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:19.879 [2024-11-06 07:59:42.325062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:23:19.879 [2024-11-06 07:59:42.325083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.879 [2024-11-06 07:59:42.325144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.879 [2024-11-06 07:59:42.325169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:19.879 [2024-11-06 07:59:42.325184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:23:19.879 [2024-11-06 07:59:42.325208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.879 [2024-11-06 07:59:42.325300] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:19.879 [2024-11-06 07:59:42.325330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.879 [2024-11-06 07:59:42.325344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:19.879 [2024-11-06 07:59:42.325374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:23:19.879 [2024-11-06 07:59:42.325393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.879 [2024-11-06 07:59:42.359774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.879 [2024-11-06 07:59:42.359863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:19.879 [2024-11-06 07:59:42.359913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.318 ms 00:23:19.879 [2024-11-06 07:59:42.359929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.879 [2024-11-06 07:59:42.360173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.879 [2024-11-06 07:59:42.360195] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:19.879 [2024-11-06 07:59:42.360219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:23:19.879 [2024-11-06 07:59:42.360233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.879 [2024-11-06 07:59:42.361944] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:19.879 [2024-11-06 07:59:42.366782] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 456.625 ms, result 0 00:23:19.879 [2024-11-06 07:59:42.368142] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:19.879 Some configs were skipped because the RPC state that can call them passed over. 00:23:19.879 07:59:42 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:23:20.139 [2024-11-06 07:59:42.707113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.139 [2024-11-06 07:59:42.707519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:23:20.139 [2024-11-06 07:59:42.707676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.892 ms 00:23:20.139 [2024-11-06 07:59:42.707808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.139 [2024-11-06 07:59:42.708019] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.801 ms, result 0 00:23:20.139 true 00:23:20.139 07:59:42 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:23:20.706 [2024-11-06 07:59:43.058961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.706 [2024-11-06 07:59:43.059060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:23:20.706 [2024-11-06 07:59:43.059091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.274 ms 00:23:20.706 [2024-11-06 07:59:43.059105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.706 [2024-11-06 07:59:43.059177] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.499 ms, result 0 00:23:20.706 true 00:23:20.706 07:59:43 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 76478 00:23:20.706 07:59:43 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 76478 ']' 00:23:20.706 07:59:43 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 76478 00:23:20.706 07:59:43 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:23:20.706 07:59:43 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:20.706 07:59:43 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76478 00:23:20.706 07:59:43 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:20.706 07:59:43 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:20.706 07:59:43 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76478' 00:23:20.706 killing process with pid 76478 00:23:20.706 07:59:43 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 76478 00:23:20.706 07:59:43 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 76478 00:23:21.644 [2024-11-06 07:59:44.258051] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.644 [2024-11-06 07:59:44.258176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:21.644 [2024-11-06 07:59:44.258201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:21.644 [2024-11-06 07:59:44.258217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.644 [2024-11-06 07:59:44.258258] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:21.644 [2024-11-06 07:59:44.262437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.644 [2024-11-06 07:59:44.262500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:21.644 [2024-11-06 07:59:44.262529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.119 ms 00:23:21.644 [2024-11-06 07:59:44.262542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.644 [2024-11-06 07:59:44.262962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.644 [2024-11-06 07:59:44.263018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:21.644 [2024-11-06 07:59:44.263038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:23:21.644 [2024-11-06 07:59:44.263050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.644 [2024-11-06 07:59:44.267569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.644 [2024-11-06 07:59:44.267804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:21.644 [2024-11-06 07:59:44.267839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.461 ms 00:23:21.644 [2024-11-06 07:59:44.267858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.909 [2024-11-06 07:59:44.275470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.909 [2024-11-06 07:59:44.275521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:21.909 [2024-11-06 07:59:44.275546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.549 ms 00:23:21.909 [2024-11-06 07:59:44.275559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.909 [2024-11-06 07:59:44.289507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.909 [2024-11-06 07:59:44.289599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:21.909 [2024-11-06 07:59:44.289645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.800 ms 00:23:21.909 [2024-11-06 07:59:44.289676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.909 [2024-11-06 07:59:44.299930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.909 [2024-11-06 07:59:44.300011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:21.909 [2024-11-06 07:59:44.300069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.180 ms 00:23:21.909 [2024-11-06 07:59:44.300083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.909 [2024-11-06 07:59:44.300319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.909 [2024-11-06 07:59:44.300346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:21.909 [2024-11-06 07:59:44.300365] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.167 ms 00:23:21.909 [2024-11-06 07:59:44.300379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.909 [2024-11-06 07:59:44.314181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.909 [2024-11-06 07:59:44.314522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:21.909 [2024-11-06 07:59:44.314570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.747 ms 00:23:21.909 [2024-11-06 07:59:44.314586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.909 [2024-11-06 07:59:44.327510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.909 [2024-11-06 07:59:44.327572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:21.909 [2024-11-06 07:59:44.327629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.821 ms 00:23:21.909 [2024-11-06 07:59:44.327644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.909 [2024-11-06 07:59:44.340019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.909 [2024-11-06 07:59:44.340090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:21.909 [2024-11-06 07:59:44.340134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.295 ms 00:23:21.909 [2024-11-06 07:59:44.340149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.909 [2024-11-06 07:59:44.352279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.909 [2024-11-06 07:59:44.352337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:21.909 [2024-11-06 07:59:44.352383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.010 ms 00:23:21.909 [2024-11-06 07:59:44.352397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.909 [2024-11-06 07:59:44.352494] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:21.909 [2024-11-06 07:59:44.352524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:21.909 [2024-11-06 07:59:44.352547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:21.909 [2024-11-06 07:59:44.352562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:21.909 [2024-11-06 07:59:44.352582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:21.909 [2024-11-06 07:59:44.352597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:21.909 [2024-11-06 07:59:44.352623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:21.909 [2024-11-06 07:59:44.352638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:21.909 [2024-11-06 07:59:44.352657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:21.909 [2024-11-06 07:59:44.352672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:21.909 [2024-11-06 07:59:44.352688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:21.909 [2024-11-06 
07:59:44.352702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:21.909 [2024-11-06 07:59:44.352718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:21.909 [2024-11-06 07:59:44.352731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:21.909 [2024-11-06 07:59:44.352751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:21.909 [2024-11-06 07:59:44.352765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:21.909 [2024-11-06 07:59:44.352782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:21.909 [2024-11-06 07:59:44.352795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:21.909 [2024-11-06 07:59:44.352811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:21.909 [2024-11-06 07:59:44.352824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:21.909 [2024-11-06 07:59:44.352840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:21.909 [2024-11-06 07:59:44.352853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:21.909 [2024-11-06 07:59:44.352877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:21.909 [2024-11-06 07:59:44.352892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:21.909 [2024-11-06 07:59:44.352911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:21.909 [2024-11-06 07:59:44.352925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:21.909 [2024-11-06 07:59:44.352945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:21.909 [2024-11-06 07:59:44.352960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:21.909 [2024-11-06 07:59:44.352979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:21.909 [2024-11-06 07:59:44.352993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:21.909 [2024-11-06 07:59:44.353012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:21.909 [2024-11-06 07:59:44.353035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:21.909 [2024-11-06 07:59:44.353073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:21.909 [2024-11-06 07:59:44.353087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:21.909 [2024-11-06 07:59:44.353106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:21.909 [2024-11-06 07:59:44.353120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:23:21.909 [2024-11-06 07:59:44.353140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:21.909 [2024-11-06 07:59:44.353155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:21.909 [2024-11-06 07:59:44.353179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:21.909 [2024-11-06 07:59:44.353195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:21.909 [2024-11-06 07:59:44.353218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:21.909 [2024-11-06 07:59:44.353233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:21.909 [2024-11-06 07:59:44.353273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:21.909 [2024-11-06 07:59:44.353291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:21.909 [2024-11-06 07:59:44.353311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:21.909 [2024-11-06 07:59:44.353326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.353360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.353390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.353410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.353424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.353443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.353458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.353477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.353491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.353516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.353530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.353548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.353564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.353583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.353597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.353616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.353646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.353665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.353680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.353699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.353713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.353732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.353747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.353770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.353785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.353809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.353824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.353845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.353859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.353878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.353893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.353912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.353926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.353946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.353960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.353979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.353993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.354012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.354026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.354046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.354061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.354086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.354100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.354120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.354134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.354153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.354169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.354189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.354203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.354224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.354239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.354278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.354293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.354313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.354328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.354348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:21.910 [2024-11-06 07:59:44.354378] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:21.910 [2024-11-06 07:59:44.354404] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a89fd864-aa96-428e-abd4-47c1715fad37 00:23:21.910 [2024-11-06 07:59:44.354447] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:21.910 [2024-11-06 07:59:44.354466] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:21.910 [2024-11-06 07:59:44.354484] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:21.910 [2024-11-06 07:59:44.354500] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:21.910 [2024-11-06 07:59:44.354513] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:21.910 [2024-11-06 07:59:44.354534] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:21.910 [2024-11-06 07:59:44.354548] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:21.910 [2024-11-06 07:59:44.354565] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:21.910 [2024-11-06 07:59:44.354578] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:21.910 [2024-11-06 07:59:44.354598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:21.910 [2024-11-06 07:59:44.354612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:21.910 [2024-11-06 07:59:44.354632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.107 ms 00:23:21.910 [2024-11-06 07:59:44.354649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.910 [2024-11-06 07:59:44.372792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.910 [2024-11-06 07:59:44.372866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:21.910 [2024-11-06 07:59:44.372904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.067 ms 00:23:21.910 [2024-11-06 07:59:44.372920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.910 [2024-11-06 07:59:44.373603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.910 [2024-11-06 07:59:44.373641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:21.910 [2024-11-06 07:59:44.373667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.523 ms 00:23:21.910 [2024-11-06 07:59:44.373682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.910 [2024-11-06 07:59:44.436522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.910 [2024-11-06 07:59:44.436630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:21.910 [2024-11-06 07:59:44.436674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.910 [2024-11-06 07:59:44.436688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.910 [2024-11-06 07:59:44.436908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.910 [2024-11-06 07:59:44.436928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:21.910 [2024-11-06 07:59:44.436946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.910 [2024-11-06 07:59:44.436960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.910 [2024-11-06 07:59:44.437064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.910 [2024-11-06 07:59:44.437086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:21.910 [2024-11-06 07:59:44.437107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.910 [2024-11-06 07:59:44.437120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.910 [2024-11-06 07:59:44.437166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.910 [2024-11-06 07:59:44.437183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:21.910 [2024-11-06 07:59:44.437199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.910 [2024-11-06 07:59:44.437213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.187 [2024-11-06 07:59:44.559604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:22.187 [2024-11-06 07:59:44.559717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:22.187 [2024-11-06 07:59:44.559744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:22.187 [2024-11-06 07:59:44.559759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.187 [2024-11-06 
07:59:44.655147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:22.187 [2024-11-06 07:59:44.655303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:22.187 [2024-11-06 07:59:44.655333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:22.187 [2024-11-06 07:59:44.655347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.187 [2024-11-06 07:59:44.655517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:22.187 [2024-11-06 07:59:44.655538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:22.187 [2024-11-06 07:59:44.655567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:22.187 [2024-11-06 07:59:44.655581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.187 [2024-11-06 07:59:44.655631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:22.187 [2024-11-06 07:59:44.655648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:22.187 [2024-11-06 07:59:44.655665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:22.187 [2024-11-06 07:59:44.655678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.187 [2024-11-06 07:59:44.655829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:22.187 [2024-11-06 07:59:44.655853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:22.187 [2024-11-06 07:59:44.655880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:22.187 [2024-11-06 07:59:44.655894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.187 [2024-11-06 07:59:44.655960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:22.187 [2024-11-06 07:59:44.655980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:22.187 [2024-11-06 07:59:44.655998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:22.187 [2024-11-06 07:59:44.656011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.187 [2024-11-06 07:59:44.656102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:22.187 [2024-11-06 07:59:44.656136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:22.187 [2024-11-06 07:59:44.656157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:22.187 [2024-11-06 07:59:44.656171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.187 [2024-11-06 07:59:44.656278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:22.187 [2024-11-06 07:59:44.656302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:22.187 [2024-11-06 07:59:44.656320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:22.187 [2024-11-06 07:59:44.656333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.187 [2024-11-06 07:59:44.656619] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 398.540 ms, result 0 00:23:23.129 07:59:45 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:23.387 [2024-11-06 07:59:45.842170] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:23:23.387 [2024-11-06 07:59:45.842585] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76547 ] 00:23:23.646 [2024-11-06 07:59:46.018052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.646 [2024-11-06 07:59:46.167740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:24.215 [2024-11-06 07:59:46.582978] bdev.c:8607:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:24.215 [2024-11-06 07:59:46.583112] bdev.c:8607:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:24.215 [2024-11-06 07:59:46.753274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.215 [2024-11-06 07:59:46.753384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:24.215 [2024-11-06 07:59:46.753424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:23:24.215 [2024-11-06 07:59:46.753437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.215 [2024-11-06 07:59:46.757238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.215 [2024-11-06 07:59:46.757298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:24.215 [2024-11-06 07:59:46.757319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.767 ms 00:23:24.215 [2024-11-06 07:59:46.757331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.215 [2024-11-06 07:59:46.757485] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:24.215 [2024-11-06 07:59:46.758552] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:24.215 [2024-11-06 07:59:46.758620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.215 [2024-11-06 07:59:46.758646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:24.215 [2024-11-06 07:59:46.758667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.147 ms 00:23:24.215 [2024-11-06 07:59:46.758687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.215 [2024-11-06 07:59:46.761434] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:24.215 [2024-11-06 07:59:46.779045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.215 [2024-11-06 07:59:46.779120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:24.215 [2024-11-06 07:59:46.779150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.612 ms 00:23:24.215 [2024-11-06 07:59:46.779164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.215 [2024-11-06 07:59:46.779397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.215 [2024-11-06 07:59:46.779428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:24.216 [2024-11-06 07:59:46.779445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:23:24.216 [2024-11-06 
07:59:46.779459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.216 [2024-11-06 07:59:46.791778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.216 [2024-11-06 07:59:46.791856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:24.216 [2024-11-06 07:59:46.791896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.237 ms 00:23:24.216 [2024-11-06 07:59:46.791910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.216 [2024-11-06 07:59:46.792161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.216 [2024-11-06 07:59:46.792187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:24.216 [2024-11-06 07:59:46.792203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:23:24.216 [2024-11-06 07:59:46.792217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.216 [2024-11-06 07:59:46.792290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.216 [2024-11-06 07:59:46.792319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:24.216 [2024-11-06 07:59:46.792341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:23:24.216 [2024-11-06 07:59:46.792354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.216 [2024-11-06 07:59:46.792399] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:24.216 [2024-11-06 07:59:46.798187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.216 [2024-11-06 07:59:46.798515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:24.216 [2024-11-06 07:59:46.798548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.802 ms 00:23:24.216 [2024-11-06 07:59:46.798562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.216 [2024-11-06 07:59:46.798653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.216 [2024-11-06 07:59:46.798673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:24.216 [2024-11-06 07:59:46.798688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:24.216 [2024-11-06 07:59:46.798700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.216 [2024-11-06 07:59:46.798738] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:24.216 [2024-11-06 07:59:46.798775] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:24.216 [2024-11-06 07:59:46.798841] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:24.216 [2024-11-06 07:59:46.798866] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:24.216 [2024-11-06 07:59:46.799001] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:24.216 [2024-11-06 07:59:46.799026] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:24.216 [2024-11-06 07:59:46.799043] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:23:24.216 [2024-11-06 07:59:46.799061] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:24.216 [2024-11-06 07:59:46.799077] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:24.216 [2024-11-06 07:59:46.799105] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:24.216 [2024-11-06 07:59:46.799118] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:24.216 [2024-11-06 07:59:46.799131] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:24.216 [2024-11-06 07:59:46.799144] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:24.216 [2024-11-06 07:59:46.799158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.216 [2024-11-06 07:59:46.799171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:24.216 [2024-11-06 07:59:46.799185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:23:24.216 [2024-11-06 07:59:46.799198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.216 [2024-11-06 07:59:46.799323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.216 [2024-11-06 07:59:46.799343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:24.216 [2024-11-06 07:59:46.799358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:23:24.216 [2024-11-06 07:59:46.799378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.216 [2024-11-06 07:59:46.799501] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:24.216 [2024-11-06 07:59:46.799520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:24.216 [2024-11-06 07:59:46.799534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:24.216 [2024-11-06 07:59:46.799548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:24.216 [2024-11-06 07:59:46.799562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:24.216 [2024-11-06 07:59:46.799573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:24.216 [2024-11-06 07:59:46.799585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:24.216 [2024-11-06 07:59:46.799596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:24.216 [2024-11-06 07:59:46.799608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:24.216 [2024-11-06 07:59:46.799619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:24.216 [2024-11-06 07:59:46.799631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:24.216 [2024-11-06 07:59:46.799643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:24.216 [2024-11-06 07:59:46.799654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:24.216 [2024-11-06 07:59:46.799682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:24.216 [2024-11-06 07:59:46.799695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:24.216 [2024-11-06 07:59:46.799706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:24.216 [2024-11-06 07:59:46.799718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:23:24.216 [2024-11-06 07:59:46.799733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:24.216 [2024-11-06 07:59:46.799745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:24.216 [2024-11-06 07:59:46.799758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:24.216 [2024-11-06 07:59:46.799770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:24.216 [2024-11-06 07:59:46.799782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:24.216 [2024-11-06 07:59:46.799794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:24.216 [2024-11-06 07:59:46.799806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:24.216 [2024-11-06 07:59:46.799818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:24.216 [2024-11-06 07:59:46.799830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:24.216 [2024-11-06 07:59:46.799841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:24.216 [2024-11-06 07:59:46.799853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:24.216 [2024-11-06 07:59:46.799864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:24.216 [2024-11-06 07:59:46.799877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:24.216 [2024-11-06 07:59:46.799890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:24.216 [2024-11-06 07:59:46.799902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:24.216 [2024-11-06 07:59:46.799914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:24.216 [2024-11-06 07:59:46.799926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:24.216 [2024-11-06 07:59:46.799938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:24.216 [2024-11-06 07:59:46.799950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:24.216 [2024-11-06 07:59:46.799961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:24.216 [2024-11-06 07:59:46.799973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:24.216 [2024-11-06 07:59:46.799985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:24.216 [2024-11-06 07:59:46.799997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:24.216 [2024-11-06 07:59:46.800009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:24.216 [2024-11-06 07:59:46.800021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:24.216 [2024-11-06 07:59:46.800033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:24.216 [2024-11-06 07:59:46.800045] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:24.216 [2024-11-06 07:59:46.800058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:24.216 [2024-11-06 07:59:46.800070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:24.216 [2024-11-06 07:59:46.800082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:24.216 [2024-11-06 07:59:46.800101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:24.216 [2024-11-06 07:59:46.800113] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:24.216 [2024-11-06 07:59:46.800126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:24.216 [2024-11-06 07:59:46.800138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:24.216 [2024-11-06 07:59:46.800150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:24.216 [2024-11-06 07:59:46.800162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:24.216 [2024-11-06 07:59:46.800176] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:24.216 [2024-11-06 07:59:46.800199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:24.216 [2024-11-06 07:59:46.800214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:24.216 [2024-11-06 07:59:46.800227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:24.216 [2024-11-06 07:59:46.800240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:24.216 [2024-11-06 07:59:46.800270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:24.216 [2024-11-06 07:59:46.800284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:24.217 [2024-11-06 07:59:46.800297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:24.217 [2024-11-06 07:59:46.800310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:24.217 [2024-11-06 07:59:46.800323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:24.217 [2024-11-06 07:59:46.800335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:24.217 [2024-11-06 07:59:46.800347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:24.217 [2024-11-06 07:59:46.800360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:24.217 [2024-11-06 07:59:46.800373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:24.217 [2024-11-06 07:59:46.800385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:24.217 [2024-11-06 07:59:46.800399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:24.217 [2024-11-06 07:59:46.800412] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:24.217 [2024-11-06 07:59:46.800427] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:24.217 [2024-11-06 07:59:46.800441] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:24.217 [2024-11-06 07:59:46.800455] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:24.217 [2024-11-06 07:59:46.800468] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:24.217 [2024-11-06 07:59:46.800480] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:24.217 [2024-11-06 07:59:46.800494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.217 [2024-11-06 07:59:46.800508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:24.217 [2024-11-06 07:59:46.800521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.061 ms 00:23:24.217 [2024-11-06 07:59:46.800540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.476 [2024-11-06 07:59:46.847983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.476 [2024-11-06 07:59:46.848364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:24.476 [2024-11-06 07:59:46.848400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.355 ms 00:23:24.476 [2024-11-06 07:59:46.848416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.476 [2024-11-06 07:59:46.848699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.476 [2024-11-06 07:59:46.848726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:24.476 [2024-11-06 07:59:46.848747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:23:24.476 [2024-11-06 07:59:46.848759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.476 [2024-11-06 07:59:46.908294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.476 [2024-11-06 07:59:46.908651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:24.476 [2024-11-06 07:59:46.908784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.479 ms 00:23:24.476 [2024-11-06 07:59:46.908849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.476 [2024-11-06 07:59:46.909177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.476 [2024-11-06 07:59:46.909348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:24.476 [2024-11-06 07:59:46.909470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:24.476 [2024-11-06 07:59:46.909525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.476 [2024-11-06 07:59:46.910431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.476 [2024-11-06 07:59:46.910570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:24.476 [2024-11-06 07:59:46.910683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.772 ms 00:23:24.476 [2024-11-06 07:59:46.910813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.476 [2024-11-06 07:59:46.911080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:23:24.476 [2024-11-06 07:59:46.911145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:24.476 [2024-11-06 07:59:46.911268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms 00:23:24.476 [2024-11-06 07:59:46.911324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.476 [2024-11-06 07:59:46.934304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.476 [2024-11-06 07:59:46.934647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:24.476 [2024-11-06 07:59:46.934790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.906 ms 00:23:24.476 [2024-11-06 07:59:46.934843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.476 [2024-11-06 07:59:46.952871] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:24.476 [2024-11-06 07:59:46.953228] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:24.476 [2024-11-06 07:59:46.953486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.476 [2024-11-06 07:59:46.953595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:24.476 [2024-11-06 07:59:46.953653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.352 ms 00:23:24.476 [2024-11-06 07:59:46.953770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.476 [2024-11-06 07:59:46.985086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.476 [2024-11-06 07:59:46.985496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:24.476 [2024-11-06 07:59:46.985620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.080 ms 00:23:24.476 [2024-11-06 07:59:46.985673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.476 [2024-11-06 07:59:47.004278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.476 [2024-11-06 07:59:47.004519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:24.476 [2024-11-06 07:59:47.004642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.359 ms 00:23:24.476 [2024-11-06 07:59:47.004694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.476 [2024-11-06 07:59:47.020596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.476 [2024-11-06 07:59:47.020813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:24.476 [2024-11-06 07:59:47.020936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.636 ms 00:23:24.476 [2024-11-06 07:59:47.021062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.476 [2024-11-06 07:59:47.022330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.476 [2024-11-06 07:59:47.022477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:24.476 [2024-11-06 07:59:47.022592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.988 ms 00:23:24.476 [2024-11-06 07:59:47.022705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.735 [2024-11-06 07:59:47.108558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.735 [2024-11-06 
07:59:47.108667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:24.736 [2024-11-06 07:59:47.108691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.770 ms 00:23:24.736 [2024-11-06 07:59:47.108706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.736 [2024-11-06 07:59:47.123753] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:24.736 [2024-11-06 07:59:47.151263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.736 [2024-11-06 07:59:47.151376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:24.736 [2024-11-06 07:59:47.151401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.320 ms 00:23:24.736 [2024-11-06 07:59:47.151415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.736 [2024-11-06 07:59:47.151636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.736 [2024-11-06 07:59:47.151664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:24.736 [2024-11-06 07:59:47.151679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:23:24.736 [2024-11-06 07:59:47.151693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.736 [2024-11-06 07:59:47.151815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.736 [2024-11-06 07:59:47.151845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:24.736 [2024-11-06 07:59:47.151859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:23:24.736 [2024-11-06 07:59:47.151871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.736 [2024-11-06 07:59:47.151933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.736 [2024-11-06 07:59:47.151954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:24.736 [2024-11-06 07:59:47.151972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:23:24.736 [2024-11-06 07:59:47.151985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.736 [2024-11-06 07:59:47.152043] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:24.736 [2024-11-06 07:59:47.152062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.736 [2024-11-06 07:59:47.152074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:24.736 [2024-11-06 07:59:47.152087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:23:24.736 [2024-11-06 07:59:47.152100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.736 [2024-11-06 07:59:47.185053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.736 [2024-11-06 07:59:47.185153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:24.736 [2024-11-06 07:59:47.185179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.919 ms 00:23:24.736 [2024-11-06 07:59:47.185193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.736 [2024-11-06 07:59:47.185428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:24.736 [2024-11-06 07:59:47.185453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:24.736 [2024-11-06 
07:59:47.185469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:23:24.736 [2024-11-06 07:59:47.185482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:24.736 [2024-11-06 07:59:47.186972] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:24.736 [2024-11-06 07:59:47.191572] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 433.307 ms, result 0 00:23:24.736 [2024-11-06 07:59:47.192670] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:24.736 [2024-11-06 07:59:47.208938] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:25.672  [2024-11-06T07:59:49.678Z] Copying: 25/256 [MB] (25 MBps) [2024-11-06T07:59:50.612Z] Copying: 49/256 [MB] (23 MBps) [2024-11-06T07:59:51.548Z] Copying: 71/256 [MB] (21 MBps) [2024-11-06T07:59:52.482Z] Copying: 93/256 [MB] (22 MBps) [2024-11-06T07:59:53.417Z] Copying: 115/256 [MB] (22 MBps) [2024-11-06T07:59:54.354Z] Copying: 137/256 [MB] (22 MBps) [2024-11-06T07:59:55.290Z] Copying: 159/256 [MB] (21 MBps) [2024-11-06T07:59:56.666Z] Copying: 181/256 [MB] (22 MBps) [2024-11-06T07:59:57.602Z] Copying: 203/256 [MB] (21 MBps) [2024-11-06T07:59:58.537Z] Copying: 226/256 [MB] (22 MBps) [2024-11-06T07:59:58.796Z] Copying: 248/256 [MB] (22 MBps) [2024-11-06T07:59:59.055Z] Copying: 256/256 [MB] (average 22 MBps)[2024-11-06 07:59:58.932990] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:36.426 [2024-11-06 07:59:58.947348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.426 [2024-11-06 07:59:58.947427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:36.426 [2024-11-06 07:59:58.947453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:36.426 [2024-11-06 07:59:58.947467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.426 [2024-11-06 07:59:58.947522] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:36.426 [2024-11-06 07:59:58.952027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.426 [2024-11-06 07:59:58.952080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:36.426 [2024-11-06 07:59:58.952114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.480 ms 00:23:36.426 [2024-11-06 07:59:58.952127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.427 [2024-11-06 07:59:58.952623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.427 [2024-11-06 07:59:58.952655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:36.427 [2024-11-06 07:59:58.952671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.463 ms 00:23:36.427 [2024-11-06 07:59:58.952683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.427 [2024-11-06 07:59:58.956378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.427 [2024-11-06 07:59:58.956410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:36.427 [2024-11-06 07:59:58.956449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.668 ms 00:23:36.427 [2024-11-06 
07:59:58.956462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.427 [2024-11-06 07:59:58.964701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.427 [2024-11-06 07:59:58.964739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:36.427 [2024-11-06 07:59:58.964771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.211 ms 00:23:36.427 [2024-11-06 07:59:58.964783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.427 [2024-11-06 07:59:58.996103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.427 [2024-11-06 07:59:58.996172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:36.427 [2024-11-06 07:59:58.996210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.215 ms 00:23:36.427 [2024-11-06 07:59:58.996223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.427 [2024-11-06 07:59:59.013847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.427 [2024-11-06 07:59:59.014174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:36.427 [2024-11-06 07:59:59.014214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.531 ms 00:23:36.427 [2024-11-06 07:59:59.014234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.427 [2024-11-06 07:59:59.014441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.427 [2024-11-06 07:59:59.014464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:36.427 [2024-11-06 07:59:59.014480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:23:36.427 [2024-11-06 07:59:59.014493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.427 [2024-11-06 07:59:59.045886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.427 [2024-11-06 07:59:59.045953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:36.427 [2024-11-06 07:59:59.045974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.347 ms 00:23:36.427 [2024-11-06 07:59:59.045986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.686 [2024-11-06 07:59:59.075614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.686 [2024-11-06 07:59:59.075695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:36.686 [2024-11-06 07:59:59.075716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.557 ms 00:23:36.686 [2024-11-06 07:59:59.075729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.686 [2024-11-06 07:59:59.104386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.686 [2024-11-06 07:59:59.104448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:36.686 [2024-11-06 07:59:59.104468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.596 ms 00:23:36.686 [2024-11-06 07:59:59.104480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.686 [2024-11-06 07:59:59.133137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.686 [2024-11-06 07:59:59.133201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:36.686 [2024-11-06 07:59:59.133222] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.559 ms 00:23:36.686 [2024-11-06 07:59:59.133235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.686 [2024-11-06 07:59:59.133311] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:36.686 [2024-11-06 07:59:59.133349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:36.686 [2024-11-06 07:59:59.133408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:36.686 [2024-11-06 07:59:59.133421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:36.686 [2024-11-06 07:59:59.133434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:36.686 [2024-11-06 07:59:59.133446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:36.686 [2024-11-06 07:59:59.133459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:36.686 [2024-11-06 07:59:59.133472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:36.686 [2024-11-06 07:59:59.133484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:36.686 [2024-11-06 07:59:59.133496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:36.686 [2024-11-06 07:59:59.133510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:36.686 [2024-11-06 07:59:59.133522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:36.686 [2024-11-06 07:59:59.133534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:36.686 [2024-11-06 07:59:59.133547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:36.686 [2024-11-06 07:59:59.133560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:36.686 [2024-11-06 07:59:59.133571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:36.686 [2024-11-06 07:59:59.133584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:36.686 [2024-11-06 07:59:59.133597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:36.686 [2024-11-06 07:59:59.133609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:36.686 [2024-11-06 07:59:59.133622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.133634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.133663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.133675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.133688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 
state: free 00:23:36.687 [2024-11-06 07:59:59.133701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.133713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.133726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.133738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.133751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.133764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.133777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.133789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.133801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.133813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.133827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.133840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.133853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.133882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.133895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.133908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.133923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.133936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.133949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.133964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.133977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.133990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 
0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134731] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:36.687 [2024-11-06 07:59:59.134782] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:36.687 [2024-11-06 07:59:59.134794] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a89fd864-aa96-428e-abd4-47c1715fad37 00:23:36.687 [2024-11-06 07:59:59.134808] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:36.687 [2024-11-06 07:59:59.134820] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:36.688 [2024-11-06 07:59:59.134832] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:36.688 [2024-11-06 07:59:59.134844] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:36.688 [2024-11-06 07:59:59.134856] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:36.688 [2024-11-06 07:59:59.134868] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:36.688 [2024-11-06 07:59:59.134879] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:36.688 [2024-11-06 07:59:59.134890] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:36.688 [2024-11-06 07:59:59.134901] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:36.688 [2024-11-06 07:59:59.134913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.688 [2024-11-06 07:59:59.134925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:36.688 [2024-11-06 07:59:59.134944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.604 ms 00:23:36.688 [2024-11-06 07:59:59.134956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.688 [2024-11-06 07:59:59.151990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.688 [2024-11-06 07:59:59.152069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:36.688 [2024-11-06 07:59:59.152088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.975 ms 00:23:36.688 [2024-11-06 07:59:59.152101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.688 [2024-11-06 07:59:59.152662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.688 [2024-11-06 07:59:59.152697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:36.688 [2024-11-06 07:59:59.152712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.488 ms 00:23:36.688 [2024-11-06 07:59:59.152724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.688 [2024-11-06 07:59:59.200423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.688 [2024-11-06 07:59:59.200501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:36.688 [2024-11-06 07:59:59.200522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.688 [2024-11-06 07:59:59.200535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.688 [2024-11-06 07:59:59.200686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:23:36.688 [2024-11-06 07:59:59.200709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:36.688 [2024-11-06 07:59:59.200723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.688 [2024-11-06 07:59:59.200735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.688 [2024-11-06 07:59:59.200812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.688 [2024-11-06 07:59:59.200831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:36.688 [2024-11-06 07:59:59.200845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.688 [2024-11-06 07:59:59.200857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.688 [2024-11-06 07:59:59.200884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.688 [2024-11-06 07:59:59.200898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:36.688 [2024-11-06 07:59:59.200918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.688 [2024-11-06 07:59:59.200930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.947 [2024-11-06 07:59:59.316164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.947 [2024-11-06 07:59:59.316237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:36.947 [2024-11-06 07:59:59.316320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.947 [2024-11-06 07:59:59.316334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.947 [2024-11-06 07:59:59.405115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.947 [2024-11-06 07:59:59.405217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:36.947 [2024-11-06 07:59:59.405254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.947 [2024-11-06 07:59:59.405289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.947 [2024-11-06 07:59:59.405432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.947 [2024-11-06 07:59:59.405457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:36.947 [2024-11-06 07:59:59.405470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.947 [2024-11-06 07:59:59.405483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.947 [2024-11-06 07:59:59.405524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.947 [2024-11-06 07:59:59.405554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:36.947 [2024-11-06 07:59:59.405568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.947 [2024-11-06 07:59:59.405587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.947 [2024-11-06 07:59:59.405750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.947 [2024-11-06 07:59:59.405772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:36.947 [2024-11-06 07:59:59.405786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.947 [2024-11-06 07:59:59.405798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.947 [2024-11-06 
07:59:59.405853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.947 [2024-11-06 07:59:59.405872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:36.947 [2024-11-06 07:59:59.405886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.947 [2024-11-06 07:59:59.405900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.947 [2024-11-06 07:59:59.405973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.947 [2024-11-06 07:59:59.405991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:36.947 [2024-11-06 07:59:59.406005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.947 [2024-11-06 07:59:59.406018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.947 [2024-11-06 07:59:59.406085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:36.947 [2024-11-06 07:59:59.406111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:36.947 [2024-11-06 07:59:59.406125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:36.947 [2024-11-06 07:59:59.406146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.947 [2024-11-06 07:59:59.406384] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 459.049 ms, result 0 00:23:37.881 00:23:37.881 00:23:37.881 08:00:00 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:38.447 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:23:38.447 08:00:01 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:23:38.447 08:00:01 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:23:38.447 08:00:01 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:38.447 08:00:01 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:38.447 08:00:01 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:23:38.706 08:00:01 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:23:38.706 08:00:01 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 76478 00:23:38.706 08:00:01 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 76478 ']' 00:23:38.706 08:00:01 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 76478 00:23:38.706 Process with pid 76478 is not found 00:23:38.706 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (76478) - No such process 00:23:38.706 08:00:01 ftl.ftl_trim -- common/autotest_common.sh@977 -- # echo 'Process with pid 76478 is not found' 00:23:38.706 00:23:38.706 real 1m16.249s 00:23:38.706 user 1m45.456s 00:23:38.706 sys 0m8.575s 00:23:38.706 ************************************ 00:23:38.706 END TEST ftl_trim 00:23:38.706 ************************************ 00:23:38.706 08:00:01 ftl.ftl_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:38.706 08:00:01 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:23:38.706 08:00:01 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:23:38.706 08:00:01 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:23:38.706 08:00:01 ftl -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:23:38.706 08:00:01 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:38.706 ************************************ 00:23:38.706 START TEST ftl_restore 00:23:38.706 ************************************ 00:23:38.706 08:00:01 ftl.ftl_restore -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:23:38.706 * Looking for test storage... 00:23:38.706 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:38.706 08:00:01 ftl.ftl_restore -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:23:38.706 08:00:01 ftl.ftl_restore -- common/autotest_common.sh@1689 -- # lcov --version 00:23:38.706 08:00:01 ftl.ftl_restore -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:23:38.964 08:00:01 ftl.ftl_restore -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:23:38.964 08:00:01 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:38.964 08:00:01 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:38.964 08:00:01 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:38.964 08:00:01 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:23:38.964 08:00:01 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:23:38.964 08:00:01 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:23:38.964 08:00:01 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:23:38.964 08:00:01 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:23:38.965 08:00:01 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:23:38.965 08:00:01 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:23:38.965 08:00:01 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:38.965 08:00:01 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:23:38.965 08:00:01 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:23:38.965 08:00:01 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:38.965 08:00:01 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:38.965 08:00:01 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:23:38.965 08:00:01 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:23:38.965 08:00:01 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:38.965 08:00:01 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:23:38.965 08:00:01 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:23:38.965 08:00:01 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:23:38.965 08:00:01 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:23:38.965 08:00:01 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:38.965 08:00:01 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:23:38.965 08:00:01 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:23:38.965 08:00:01 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:38.965 08:00:01 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:38.965 08:00:01 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:23:38.965 08:00:01 ftl.ftl_restore -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:38.965 08:00:01 ftl.ftl_restore -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:23:38.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.965 --rc genhtml_branch_coverage=1 00:23:38.965 --rc genhtml_function_coverage=1 00:23:38.965 --rc genhtml_legend=1 00:23:38.965 --rc geninfo_all_blocks=1 00:23:38.965 --rc geninfo_unexecuted_blocks=1 00:23:38.965 00:23:38.965 ' 00:23:38.965 08:00:01 ftl.ftl_restore -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:23:38.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.965 --rc genhtml_branch_coverage=1 00:23:38.965 --rc genhtml_function_coverage=1 00:23:38.965 --rc genhtml_legend=1 00:23:38.965 --rc geninfo_all_blocks=1 00:23:38.965 --rc geninfo_unexecuted_blocks=1 00:23:38.965 00:23:38.965 ' 00:23:38.965 08:00:01 ftl.ftl_restore -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:23:38.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.965 --rc genhtml_branch_coverage=1 00:23:38.965 --rc genhtml_function_coverage=1 00:23:38.965 --rc genhtml_legend=1 00:23:38.965 --rc geninfo_all_blocks=1 00:23:38.965 --rc geninfo_unexecuted_blocks=1 00:23:38.965 00:23:38.965 ' 00:23:38.965 08:00:01 ftl.ftl_restore -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:23:38.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.965 --rc genhtml_branch_coverage=1 00:23:38.965 --rc genhtml_function_coverage=1 00:23:38.965 --rc genhtml_legend=1 00:23:38.965 --rc geninfo_all_blocks=1 00:23:38.965 --rc geninfo_unexecuted_blocks=1 00:23:38.965 00:23:38.965 ' 00:23:38.965 08:00:01 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:38.965 08:00:01 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:23:38.965 08:00:01 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:38.965 08:00:01 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:38.965 08:00:01 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
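(Annotation: the xtrace above steps through the version-comparison helper in scripts/common.sh, which splits each version string on the characters ".-:" and compares the fields numerically so that "1.15" sorts below "2". A minimal standalone sketch of that technique, under the assumption that the trace reflects the helper's logic; the function name below is illustrative, not the exact SPDK helper:

    #!/usr/bin/env bash
    # Compare two dotted version strings field by field, numerically.
    # Prints "lt", "gt", or "eq". Modeled on the cmp_versions xtrace
    # above; illustrative only, not a verbatim copy of scripts/common.sh.
    cmp_versions_sketch() {
        local IFS='.-:'
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            # A missing field counts as 0, e.g. "1.15" vs "2".
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { echo gt; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { echo lt; return; }
        done
        echo eq
    }

    cmp_versions_sketch 1.15 2   # -> lt, so the pre-2.x lcov flags are selected

This matches the outcome in the trace: lcov 1.15 compares less-than 2, so the older --rc lcov_* option spelling is exported.)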
00:23:38.965 08:00:01 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:38.965 08:00:01 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:38.965 08:00:01 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:38.965 08:00:01 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:38.965 08:00:01 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:38.965 08:00:01 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:38.965 08:00:01 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:38.965 08:00:01 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:38.965 08:00:01 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:38.965 08:00:01 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:38.965 08:00:01 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:38.965 08:00:01 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:38.965 08:00:01 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:38.965 08:00:01 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:38.965 08:00:01 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:38.965 08:00:01 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:38.965 08:00:01 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:38.965 08:00:01 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:38.965 08:00:01 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:38.965 08:00:01 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:38.965 08:00:01 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:38.965 08:00:01 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:38.965 08:00:01 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:38.965 08:00:01 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:38.965 08:00:01 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:38.965 08:00:01 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:23:38.965 08:00:01 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.kEmM4U7tps 00:23:38.965 08:00:01 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:23:38.965 08:00:01 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:23:38.965 08:00:01 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:23:38.965 08:00:01 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:23:38.965 08:00:01 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:23:38.965 08:00:01 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:23:38.965 08:00:01 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:23:38.965 08:00:01 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:23:38.965 
08:00:01 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=76772 00:23:38.965 08:00:01 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 76772 00:23:38.965 08:00:01 ftl.ftl_restore -- common/autotest_common.sh@831 -- # '[' -z 76772 ']' 00:23:38.965 08:00:01 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:38.965 08:00:01 ftl.ftl_restore -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:38.965 08:00:01 ftl.ftl_restore -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:38.965 08:00:01 ftl.ftl_restore -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:38.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:38.965 08:00:01 ftl.ftl_restore -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:38.965 08:00:01 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:23:38.965 [2024-11-06 08:00:01.559858] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:23:38.965 [2024-11-06 08:00:01.560038] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76772 ] 00:23:39.223 [2024-11-06 08:00:01.751806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.481 [2024-11-06 08:00:01.926474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:40.416 08:00:02 ftl.ftl_restore -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:40.416 08:00:02 ftl.ftl_restore -- common/autotest_common.sh@864 -- # return 0 00:23:40.416 08:00:02 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:40.416 08:00:02 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:23:40.416 08:00:02 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:40.416 08:00:02 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:23:40.416 08:00:02 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:23:40.416 08:00:02 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:40.675 08:00:03 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:40.675 08:00:03 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:23:40.675 08:00:03 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:40.675 08:00:03 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:23:40.675 08:00:03 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:40.675 08:00:03 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:23:40.675 08:00:03 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:23:40.675 08:00:03 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:40.933 08:00:03 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:40.933 { 00:23:40.933 "name": "nvme0n1", 00:23:40.933 "aliases": [ 00:23:40.933 "7aa2cbb1-175e-4e49-a576-0b2856e89ed8" 00:23:40.933 ], 00:23:40.933 "product_name": "NVMe disk", 00:23:40.933 "block_size": 4096, 00:23:40.933 "num_blocks": 1310720, 00:23:40.933 "uuid": 
"7aa2cbb1-175e-4e49-a576-0b2856e89ed8", 00:23:40.933 "numa_id": -1, 00:23:40.933 "assigned_rate_limits": { 00:23:40.934 "rw_ios_per_sec": 0, 00:23:40.934 "rw_mbytes_per_sec": 0, 00:23:40.934 "r_mbytes_per_sec": 0, 00:23:40.934 "w_mbytes_per_sec": 0 00:23:40.934 }, 00:23:40.934 "claimed": true, 00:23:40.934 "claim_type": "read_many_write_one", 00:23:40.934 "zoned": false, 00:23:40.934 "supported_io_types": { 00:23:40.934 "read": true, 00:23:40.934 "write": true, 00:23:40.934 "unmap": true, 00:23:40.934 "flush": true, 00:23:40.934 "reset": true, 00:23:40.934 "nvme_admin": true, 00:23:40.934 "nvme_io": true, 00:23:40.934 "nvme_io_md": false, 00:23:40.934 "write_zeroes": true, 00:23:40.934 "zcopy": false, 00:23:40.934 "get_zone_info": false, 00:23:40.934 "zone_management": false, 00:23:40.934 "zone_append": false, 00:23:40.934 "compare": true, 00:23:40.934 "compare_and_write": false, 00:23:40.934 "abort": true, 00:23:40.934 "seek_hole": false, 00:23:40.934 "seek_data": false, 00:23:40.934 "copy": true, 00:23:40.934 "nvme_iov_md": false 00:23:40.934 }, 00:23:40.934 "driver_specific": { 00:23:40.934 "nvme": [ 00:23:40.934 { 00:23:40.934 "pci_address": "0000:00:11.0", 00:23:40.934 "trid": { 00:23:40.934 "trtype": "PCIe", 00:23:40.934 "traddr": "0000:00:11.0" 00:23:40.934 }, 00:23:40.934 "ctrlr_data": { 00:23:40.934 "cntlid": 0, 00:23:40.934 "vendor_id": "0x1b36", 00:23:40.934 "model_number": "QEMU NVMe Ctrl", 00:23:40.934 "serial_number": "12341", 00:23:40.934 "firmware_revision": "8.0.0", 00:23:40.934 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:40.934 "oacs": { 00:23:40.934 "security": 0, 00:23:40.934 "format": 1, 00:23:40.934 "firmware": 0, 00:23:40.934 "ns_manage": 1 00:23:40.934 }, 00:23:40.934 "multi_ctrlr": false, 00:23:40.934 "ana_reporting": false 00:23:40.934 }, 00:23:40.934 "vs": { 00:23:40.934 "nvme_version": "1.4" 00:23:40.934 }, 00:23:40.934 "ns_data": { 00:23:40.934 "id": 1, 00:23:40.934 "can_share": false 00:23:40.934 } 00:23:40.934 } 00:23:40.934 ], 00:23:40.934 "mp_policy": "active_passive" 00:23:40.934 } 00:23:40.934 } 00:23:40.934 ]' 00:23:40.934 08:00:03 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:41.192 08:00:03 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:23:41.192 08:00:03 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:41.192 08:00:03 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:23:41.192 08:00:03 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:23:41.192 08:00:03 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:23:41.192 08:00:03 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:23:41.192 08:00:03 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:41.192 08:00:03 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:23:41.192 08:00:03 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:41.192 08:00:03 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:41.450 08:00:03 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=ebbe1287-1ea8-499a-81dd-b031ec62a928 00:23:41.450 08:00:03 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:23:41.450 08:00:03 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ebbe1287-1ea8-499a-81dd-b031ec62a928 00:23:41.708 08:00:04 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:23:41.966 08:00:04 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=7e01ea90-f844-4e76-9e66-c690124987ee 00:23:41.966 08:00:04 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 7e01ea90-f844-4e76-9e66-c690124987ee 00:23:42.224 08:00:04 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=f976067d-2d53-4c91-bd0c-e0097aeb9455 00:23:42.224 08:00:04 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:23:42.224 08:00:04 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 f976067d-2d53-4c91-bd0c-e0097aeb9455 00:23:42.224 08:00:04 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:23:42.224 08:00:04 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:23:42.224 08:00:04 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=f976067d-2d53-4c91-bd0c-e0097aeb9455 00:23:42.224 08:00:04 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:23:42.224 08:00:04 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size f976067d-2d53-4c91-bd0c-e0097aeb9455 00:23:42.224 08:00:04 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=f976067d-2d53-4c91-bd0c-e0097aeb9455 00:23:42.224 08:00:04 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:42.224 08:00:04 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:23:42.224 08:00:04 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:23:42.224 08:00:04 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f976067d-2d53-4c91-bd0c-e0097aeb9455 00:23:42.483 08:00:05 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:42.483 { 00:23:42.483 "name": "f976067d-2d53-4c91-bd0c-e0097aeb9455", 00:23:42.483 "aliases": [ 00:23:42.483 "lvs/nvme0n1p0" 00:23:42.483 ], 00:23:42.483 "product_name": "Logical Volume", 00:23:42.483 "block_size": 4096, 00:23:42.483 "num_blocks": 26476544, 00:23:42.483 "uuid": "f976067d-2d53-4c91-bd0c-e0097aeb9455", 00:23:42.483 "assigned_rate_limits": { 00:23:42.483 "rw_ios_per_sec": 0, 00:23:42.483 "rw_mbytes_per_sec": 0, 00:23:42.483 "r_mbytes_per_sec": 0, 00:23:42.483 "w_mbytes_per_sec": 0 00:23:42.483 }, 00:23:42.483 "claimed": false, 00:23:42.483 "zoned": false, 00:23:42.483 "supported_io_types": { 00:23:42.483 "read": true, 00:23:42.483 "write": true, 00:23:42.483 "unmap": true, 00:23:42.483 "flush": false, 00:23:42.483 "reset": true, 00:23:42.483 "nvme_admin": false, 00:23:42.483 "nvme_io": false, 00:23:42.483 "nvme_io_md": false, 00:23:42.483 "write_zeroes": true, 00:23:42.483 "zcopy": false, 00:23:42.483 "get_zone_info": false, 00:23:42.483 "zone_management": false, 00:23:42.483 "zone_append": false, 00:23:42.483 "compare": false, 00:23:42.483 "compare_and_write": false, 00:23:42.483 "abort": false, 00:23:42.483 "seek_hole": true, 00:23:42.483 "seek_data": true, 00:23:42.483 "copy": false, 00:23:42.483 "nvme_iov_md": false 00:23:42.483 }, 00:23:42.483 "driver_specific": { 00:23:42.483 "lvol": { 00:23:42.483 "lvol_store_uuid": "7e01ea90-f844-4e76-9e66-c690124987ee", 00:23:42.483 "base_bdev": "nvme0n1", 00:23:42.483 "thin_provision": true, 00:23:42.483 "num_allocated_clusters": 0, 00:23:42.483 "snapshot": false, 00:23:42.483 "clone": false, 00:23:42.483 "esnap_clone": false 00:23:42.483 } 00:23:42.483 } 00:23:42.483 } 00:23:42.483 ]' 00:23:42.483 08:00:05 ftl.ftl_restore -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:42.743 08:00:05 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:23:42.743 08:00:05 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:42.743 08:00:05 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:42.743 08:00:05 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:42.743 08:00:05 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:23:42.743 08:00:05 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:23:42.743 08:00:05 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:23:42.743 08:00:05 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:43.006 08:00:05 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:43.006 08:00:05 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:43.006 08:00:05 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size f976067d-2d53-4c91-bd0c-e0097aeb9455 00:23:43.006 08:00:05 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=f976067d-2d53-4c91-bd0c-e0097aeb9455 00:23:43.006 08:00:05 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:43.006 08:00:05 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:23:43.006 08:00:05 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:23:43.006 08:00:05 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f976067d-2d53-4c91-bd0c-e0097aeb9455 00:23:43.264 08:00:05 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:43.264 { 00:23:43.264 "name": "f976067d-2d53-4c91-bd0c-e0097aeb9455", 00:23:43.264 "aliases": [ 00:23:43.264 "lvs/nvme0n1p0" 00:23:43.264 ], 00:23:43.264 "product_name": "Logical Volume", 00:23:43.264 "block_size": 4096, 00:23:43.264 "num_blocks": 26476544, 00:23:43.264 "uuid": "f976067d-2d53-4c91-bd0c-e0097aeb9455", 00:23:43.264 "assigned_rate_limits": { 00:23:43.264 "rw_ios_per_sec": 0, 00:23:43.264 "rw_mbytes_per_sec": 0, 00:23:43.264 "r_mbytes_per_sec": 0, 00:23:43.264 "w_mbytes_per_sec": 0 00:23:43.264 }, 00:23:43.264 "claimed": false, 00:23:43.264 "zoned": false, 00:23:43.264 "supported_io_types": { 00:23:43.264 "read": true, 00:23:43.264 "write": true, 00:23:43.264 "unmap": true, 00:23:43.264 "flush": false, 00:23:43.264 "reset": true, 00:23:43.264 "nvme_admin": false, 00:23:43.264 "nvme_io": false, 00:23:43.264 "nvme_io_md": false, 00:23:43.264 "write_zeroes": true, 00:23:43.264 "zcopy": false, 00:23:43.264 "get_zone_info": false, 00:23:43.264 "zone_management": false, 00:23:43.264 "zone_append": false, 00:23:43.264 "compare": false, 00:23:43.264 "compare_and_write": false, 00:23:43.264 "abort": false, 00:23:43.264 "seek_hole": true, 00:23:43.264 "seek_data": true, 00:23:43.264 "copy": false, 00:23:43.264 "nvme_iov_md": false 00:23:43.264 }, 00:23:43.264 "driver_specific": { 00:23:43.264 "lvol": { 00:23:43.264 "lvol_store_uuid": "7e01ea90-f844-4e76-9e66-c690124987ee", 00:23:43.264 "base_bdev": "nvme0n1", 00:23:43.264 "thin_provision": true, 00:23:43.264 "num_allocated_clusters": 0, 00:23:43.264 "snapshot": false, 00:23:43.264 "clone": false, 00:23:43.264 "esnap_clone": false 00:23:43.264 } 00:23:43.264 } 00:23:43.264 } 00:23:43.264 ]' 00:23:43.264 08:00:05 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 
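(Annotation: the get_bdev_size trace above derives the lvol's size in MiB from the bdev_get_bdevs JSON, multiplying block_size by num_blocks: 4096 B x 26476544 blocks = 103424 MiB. A minimal re-derivation of that computation, assuming a running spdk_tgt and the rpc.py path and bdev UUID shown in the log; the variable names are illustrative:

    # Re-derive the bdev size the trace computes from bdev_get_bdevs output.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdev_info=$("$rpc" bdev_get_bdevs -b f976067d-2d53-4c91-bd0c-e0097aeb9455)
    bs=$(jq '.[] .block_size' <<< "$bdev_info")    # 4096 in the trace
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")    # 26476544 in the trace
    echo $(( bs * nb / 1024 / 1024 ))              # -> 103424 (MiB)

The 103424 MiB result is what the subsequent base_size/cache_size checks in ftl/common.sh consume.)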
00:23:43.264 08:00:05 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:23:43.264 08:00:05 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:43.522 08:00:05 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:43.522 08:00:05 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:43.522 08:00:05 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:23:43.522 08:00:05 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:23:43.522 08:00:05 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:43.781 08:00:06 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:23:43.781 08:00:06 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size f976067d-2d53-4c91-bd0c-e0097aeb9455 00:23:43.781 08:00:06 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=f976067d-2d53-4c91-bd0c-e0097aeb9455 00:23:43.781 08:00:06 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:43.781 08:00:06 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:23:43.781 08:00:06 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:23:43.781 08:00:06 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f976067d-2d53-4c91-bd0c-e0097aeb9455 00:23:44.040 08:00:06 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:44.040 { 00:23:44.040 "name": "f976067d-2d53-4c91-bd0c-e0097aeb9455", 00:23:44.040 "aliases": [ 00:23:44.040 "lvs/nvme0n1p0" 00:23:44.040 ], 00:23:44.040 "product_name": "Logical Volume", 00:23:44.040 "block_size": 4096, 00:23:44.040 "num_blocks": 26476544, 00:23:44.040 "uuid": "f976067d-2d53-4c91-bd0c-e0097aeb9455", 00:23:44.040 "assigned_rate_limits": { 00:23:44.040 "rw_ios_per_sec": 0, 00:23:44.040 "rw_mbytes_per_sec": 0, 00:23:44.040 "r_mbytes_per_sec": 0, 00:23:44.040 "w_mbytes_per_sec": 0 00:23:44.040 }, 00:23:44.040 "claimed": false, 00:23:44.040 "zoned": false, 00:23:44.040 "supported_io_types": { 00:23:44.040 "read": true, 00:23:44.040 "write": true, 00:23:44.040 "unmap": true, 00:23:44.040 "flush": false, 00:23:44.040 "reset": true, 00:23:44.040 "nvme_admin": false, 00:23:44.040 "nvme_io": false, 00:23:44.040 "nvme_io_md": false, 00:23:44.040 "write_zeroes": true, 00:23:44.040 "zcopy": false, 00:23:44.040 "get_zone_info": false, 00:23:44.040 "zone_management": false, 00:23:44.040 "zone_append": false, 00:23:44.040 "compare": false, 00:23:44.040 "compare_and_write": false, 00:23:44.040 "abort": false, 00:23:44.040 "seek_hole": true, 00:23:44.040 "seek_data": true, 00:23:44.040 "copy": false, 00:23:44.040 "nvme_iov_md": false 00:23:44.040 }, 00:23:44.040 "driver_specific": { 00:23:44.040 "lvol": { 00:23:44.040 "lvol_store_uuid": "7e01ea90-f844-4e76-9e66-c690124987ee", 00:23:44.040 "base_bdev": "nvme0n1", 00:23:44.040 "thin_provision": true, 00:23:44.040 "num_allocated_clusters": 0, 00:23:44.040 "snapshot": false, 00:23:44.040 "clone": false, 00:23:44.040 "esnap_clone": false 00:23:44.040 } 00:23:44.040 } 00:23:44.040 } 00:23:44.040 ]' 00:23:44.040 08:00:06 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:44.040 08:00:06 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:23:44.040 08:00:06 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:44.040 08:00:06 ftl.ftl_restore -- 
common/autotest_common.sh@1384 -- # nb=26476544 00:23:44.040 08:00:06 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:44.040 08:00:06 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:23:44.040 08:00:06 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:23:44.040 08:00:06 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d f976067d-2d53-4c91-bd0c-e0097aeb9455 --l2p_dram_limit 10' 00:23:44.040 08:00:06 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:23:44.040 08:00:06 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:23:44.040 08:00:06 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:23:44.040 08:00:06 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:23:44.040 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:23:44.040 08:00:06 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d f976067d-2d53-4c91-bd0c-e0097aeb9455 --l2p_dram_limit 10 -c nvc0n1p0 00:23:44.300 [2024-11-06 08:00:06.895293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.300 [2024-11-06 08:00:06.895397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:44.300 [2024-11-06 08:00:06.895443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:23:44.300 [2024-11-06 08:00:06.895457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.300 [2024-11-06 08:00:06.895569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.300 [2024-11-06 08:00:06.895591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:44.300 [2024-11-06 08:00:06.895608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:23:44.300 [2024-11-06 08:00:06.895620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.300 [2024-11-06 08:00:06.895655] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:44.300 [2024-11-06 08:00:06.896745] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:44.300 [2024-11-06 08:00:06.896788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.300 [2024-11-06 08:00:06.896802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:44.300 [2024-11-06 08:00:06.896826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.138 ms 00:23:44.300 [2024-11-06 08:00:06.896838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.300 [2024-11-06 08:00:06.897007] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b72774d1-8924-47f6-808c-25def4de7f7d 00:23:44.300 [2024-11-06 08:00:06.899409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.300 [2024-11-06 08:00:06.899457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:44.300 [2024-11-06 08:00:06.899484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:23:44.300 [2024-11-06 08:00:06.899499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.300 [2024-11-06 08:00:06.912743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.300 [2024-11-06 
08:00:06.912846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:44.300 [2024-11-06 08:00:06.912868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.145 ms 00:23:44.300 [2024-11-06 08:00:06.912889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.300 [2024-11-06 08:00:06.913082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.300 [2024-11-06 08:00:06.913108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:44.300 [2024-11-06 08:00:06.913123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:23:44.300 [2024-11-06 08:00:06.913143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.300 [2024-11-06 08:00:06.913314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.300 [2024-11-06 08:00:06.913347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:44.300 [2024-11-06 08:00:06.913363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:23:44.300 [2024-11-06 08:00:06.913378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.300 [2024-11-06 08:00:06.913420] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:44.300 [2024-11-06 08:00:06.919110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.300 [2024-11-06 08:00:06.919153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:44.300 [2024-11-06 08:00:06.919195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.699 ms 00:23:44.300 [2024-11-06 08:00:06.919208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.300 [2024-11-06 08:00:06.919258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.300 [2024-11-06 08:00:06.919290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:44.300 [2024-11-06 08:00:06.919308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:44.300 [2024-11-06 08:00:06.919320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.300 [2024-11-06 08:00:06.919372] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:44.300 [2024-11-06 08:00:06.919536] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:44.300 [2024-11-06 08:00:06.919569] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:44.300 [2024-11-06 08:00:06.919587] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:44.300 [2024-11-06 08:00:06.919605] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:44.300 [2024-11-06 08:00:06.919619] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:44.300 [2024-11-06 08:00:06.919635] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:44.300 [2024-11-06 08:00:06.919647] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:44.300 [2024-11-06 08:00:06.919661] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:44.300 [2024-11-06 08:00:06.919673] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:44.300 [2024-11-06 08:00:06.919692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.300 [2024-11-06 08:00:06.919706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:44.300 [2024-11-06 08:00:06.919722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:23:44.300 [2024-11-06 08:00:06.919766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.300 [2024-11-06 08:00:06.919867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.300 [2024-11-06 08:00:06.919882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:44.300 [2024-11-06 08:00:06.919898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:23:44.300 [2024-11-06 08:00:06.919909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.300 [2024-11-06 08:00:06.920028] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:44.300 [2024-11-06 08:00:06.920061] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:44.300 [2024-11-06 08:00:06.920076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:44.300 [2024-11-06 08:00:06.920088] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:44.300 [2024-11-06 08:00:06.920103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:44.300 [2024-11-06 08:00:06.920113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:44.300 [2024-11-06 08:00:06.920127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:44.300 [2024-11-06 08:00:06.920137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:44.300 [2024-11-06 08:00:06.920150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:44.300 [2024-11-06 08:00:06.920161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:44.300 [2024-11-06 08:00:06.920174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:44.300 [2024-11-06 08:00:06.920185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:44.300 [2024-11-06 08:00:06.920198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:44.300 [2024-11-06 08:00:06.920209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:44.300 [2024-11-06 08:00:06.920222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:44.301 [2024-11-06 08:00:06.920233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:44.301 [2024-11-06 08:00:06.920249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:44.301 [2024-11-06 08:00:06.920260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:44.301 [2024-11-06 08:00:06.920275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:44.301 [2024-11-06 08:00:06.920719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:44.301 [2024-11-06 08:00:06.920772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:44.301 [2024-11-06 08:00:06.920813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:44.301 [2024-11-06 08:00:06.920947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:44.301 
[2024-11-06 08:00:06.920999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:44.301 [2024-11-06 08:00:06.921061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:44.301 [2024-11-06 08:00:06.921108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:44.301 [2024-11-06 08:00:06.921237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:44.301 [2024-11-06 08:00:06.921302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:44.301 [2024-11-06 08:00:06.921364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:44.301 [2024-11-06 08:00:06.921380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:44.301 [2024-11-06 08:00:06.921395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:44.301 [2024-11-06 08:00:06.921405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:44.301 [2024-11-06 08:00:06.921422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:44.301 [2024-11-06 08:00:06.921433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:44.301 [2024-11-06 08:00:06.921446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:44.301 [2024-11-06 08:00:06.921457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:44.301 [2024-11-06 08:00:06.921470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:44.301 [2024-11-06 08:00:06.921481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:44.301 [2024-11-06 08:00:06.921494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:44.301 [2024-11-06 08:00:06.921506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:44.301 [2024-11-06 08:00:06.921520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:44.301 [2024-11-06 08:00:06.921532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:44.301 [2024-11-06 08:00:06.921546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:44.301 [2024-11-06 08:00:06.921557] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:44.301 [2024-11-06 08:00:06.921572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:44.301 [2024-11-06 08:00:06.921584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:44.301 [2024-11-06 08:00:06.921601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:44.301 [2024-11-06 08:00:06.921613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:44.301 [2024-11-06 08:00:06.921630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:44.301 [2024-11-06 08:00:06.921641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:44.301 [2024-11-06 08:00:06.921655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:44.301 [2024-11-06 08:00:06.921667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:44.301 [2024-11-06 08:00:06.921680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:44.301 [2024-11-06 08:00:06.921698] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:44.301 [2024-11-06 
08:00:06.921718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:44.301 [2024-11-06 08:00:06.921732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:44.301 [2024-11-06 08:00:06.921747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:44.301 [2024-11-06 08:00:06.921759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:44.301 [2024-11-06 08:00:06.921774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:44.301 [2024-11-06 08:00:06.921786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:44.301 [2024-11-06 08:00:06.921801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:44.301 [2024-11-06 08:00:06.921814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:44.301 [2024-11-06 08:00:06.921829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:44.301 [2024-11-06 08:00:06.921841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:44.301 [2024-11-06 08:00:06.921859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:44.301 [2024-11-06 08:00:06.921871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:44.301 [2024-11-06 08:00:06.921886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:44.301 [2024-11-06 08:00:06.921898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:44.301 [2024-11-06 08:00:06.921913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:44.301 [2024-11-06 08:00:06.921926] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:44.301 [2024-11-06 08:00:06.921948] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:44.301 [2024-11-06 08:00:06.921963] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:44.301 [2024-11-06 08:00:06.921978] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:44.301 [2024-11-06 08:00:06.921991] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:44.301 [2024-11-06 08:00:06.922006] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:44.301 [2024-11-06 08:00:06.922020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.301 [2024-11-06 08:00:06.922036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:44.301 [2024-11-06 08:00:06.922050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.065 ms 00:23:44.301 [2024-11-06 08:00:06.922065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.301 [2024-11-06 08:00:06.922137] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:23:44.301 [2024-11-06 08:00:06.922162] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:47.586 [2024-11-06 08:00:10.081356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.586 [2024-11-06 08:00:10.081811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:47.586 [2024-11-06 08:00:10.081846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3159.231 ms 00:23:47.586 [2024-11-06 08:00:10.081865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.586 [2024-11-06 08:00:10.125559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.586 [2024-11-06 08:00:10.125681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:47.586 [2024-11-06 08:00:10.125705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.356 ms 00:23:47.586 [2024-11-06 08:00:10.125723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.586 [2024-11-06 08:00:10.125945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.586 [2024-11-06 08:00:10.125971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:47.586 [2024-11-06 08:00:10.125986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:23:47.586 [2024-11-06 08:00:10.126005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.586 [2024-11-06 08:00:10.172892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.586 [2024-11-06 08:00:10.173369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:47.586 [2024-11-06 08:00:10.173403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.788 ms 00:23:47.586 [2024-11-06 08:00:10.173421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.586 [2024-11-06 08:00:10.173504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.586 [2024-11-06 08:00:10.173526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:47.586 [2024-11-06 08:00:10.173541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:47.586 [2024-11-06 08:00:10.173560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.586 [2024-11-06 08:00:10.174450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.586 [2024-11-06 08:00:10.174484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:47.586 [2024-11-06 08:00:10.174498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.771 ms 00:23:47.586 [2024-11-06 08:00:10.174513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.586 
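Every management step in the FTL startup sequence above is bracketed by the same four trace_step records (Action, name, duration, status), which makes per-step timings easy to pull out of a console log; the 3159.231 ms "Scrub NV cache" step dwarfs everything else in this run. A minimal sketch, assuming one trace_step record per line as in the raw console output, with console.log as a placeholder file name:

  # pair each "name:" record with the "duration:" record that follows it,
  # then list the slowest FTL management steps first
  awk '/trace_step/ && /name: /     { n = $0; sub(/.*name: /, "", n) }
       /trace_step/ && /duration: / { d = $0; sub(/.*duration: /, "", d); sub(/ ms.*/, "", d)
                                      printf "%10.3f ms  %s\n", d, n }' console.log | sort -rn | head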
[2024-11-06 08:00:10.174719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.586 [2024-11-06 08:00:10.174744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:47.586 [2024-11-06 08:00:10.174758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 00:23:47.586 [2024-11-06 08:00:10.174776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.586 [2024-11-06 08:00:10.198132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.586 [2024-11-06 08:00:10.198250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:47.586 [2024-11-06 08:00:10.198309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.319 ms 00:23:47.586 [2024-11-06 08:00:10.198339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.849 [2024-11-06 08:00:10.226364] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:47.849 [2024-11-06 08:00:10.232531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.849 [2024-11-06 08:00:10.232598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:47.849 [2024-11-06 08:00:10.232643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.988 ms 00:23:47.849 [2024-11-06 08:00:10.232656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.849 [2024-11-06 08:00:10.314219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.849 [2024-11-06 08:00:10.314344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:47.849 [2024-11-06 08:00:10.314390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.459 ms 00:23:47.849 [2024-11-06 08:00:10.314404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.849 [2024-11-06 08:00:10.314657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.849 [2024-11-06 08:00:10.314676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:47.849 [2024-11-06 08:00:10.314697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:23:47.849 [2024-11-06 08:00:10.314714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.849 [2024-11-06 08:00:10.344139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.849 [2024-11-06 08:00:10.344225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:47.849 [2024-11-06 08:00:10.344283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.328 ms 00:23:47.849 [2024-11-06 08:00:10.344299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.849 [2024-11-06 08:00:10.372346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.849 [2024-11-06 08:00:10.372425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:47.849 [2024-11-06 08:00:10.372468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.989 ms 00:23:47.849 [2024-11-06 08:00:10.372482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.849 [2024-11-06 08:00:10.373418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.849 [2024-11-06 08:00:10.373451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:47.849 
[2024-11-06 08:00:10.373470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.882 ms
00:23:47.849 [2024-11-06 08:00:10.373483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:47.849 [2024-11-06 08:00:10.465589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:47.849 [2024-11-06 08:00:10.465691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region
00:23:47.849 [2024-11-06 08:00:10.465741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 92.002 ms
00:23:47.849 [2024-11-06 08:00:10.465755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:48.121 [2024-11-06 08:00:10.498240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:48.121 [2024-11-06 08:00:10.498346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map
00:23:48.121 [2024-11-06 08:00:10.498394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.326 ms
00:23:48.121 [2024-11-06 08:00:10.498408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:48.121 [2024-11-06 08:00:10.529247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:48.121 [2024-11-06 08:00:10.529345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log
00:23:48.121 [2024-11-06 08:00:10.529404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.751 ms
00:23:48.121 [2024-11-06 08:00:10.529417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:48.121 [2024-11-06 08:00:10.559615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:48.121 [2024-11-06 08:00:10.559936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:23:48.121 [2024-11-06 08:00:10.559975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.111 ms
00:23:48.121 [2024-11-06 08:00:10.559991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:48.121 [2024-11-06 08:00:10.560071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:48.121 [2024-11-06 08:00:10.560090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:23:48.121 [2024-11-06 08:00:10.560355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms
00:23:48.121 [2024-11-06 08:00:10.560372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:48.121 [2024-11-06 08:00:10.560534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:48.121 [2024-11-06 08:00:10.560554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:23:48.121 [2024-11-06 08:00:10.560571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms
00:23:48.121 [2024-11-06 08:00:10.560588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:48.121 [2024-11-06 08:00:10.562305] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3666.415 ms, result 0
00:23:48.121 {
00:23:48.121   "name": "ftl0",
00:23:48.121   "uuid": "b72774d1-8924-47f6-808c-25def4de7f7d"
00:23:48.121 }
00:23:48.121 08:00:10 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": ['
00:23:48.121 08:00:10 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:23:48.379 08:00:10 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}'
00:23:48.379 08:00:10 ftl.ftl_restore --
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:48.638 [2024-11-06 08:00:11.209122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.638 [2024-11-06 08:00:11.209643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:48.638 [2024-11-06 08:00:11.209684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:48.638 [2024-11-06 08:00:11.209726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.638 [2024-11-06 08:00:11.209784] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:48.638 [2024-11-06 08:00:11.214009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.638 [2024-11-06 08:00:11.214196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:48.638 [2024-11-06 08:00:11.214363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.188 ms 00:23:48.638 [2024-11-06 08:00:11.214418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.638 [2024-11-06 08:00:11.215103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.638 [2024-11-06 08:00:11.215150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:48.638 [2024-11-06 08:00:11.215184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.352 ms 00:23:48.638 [2024-11-06 08:00:11.215214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.638 [2024-11-06 08:00:11.218378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.638 [2024-11-06 08:00:11.218410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:48.638 [2024-11-06 08:00:11.218445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.136 ms 00:23:48.638 [2024-11-06 08:00:11.218457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.638 [2024-11-06 08:00:11.224441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.638 [2024-11-06 08:00:11.224472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:48.638 [2024-11-06 08:00:11.224507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.955 ms 00:23:48.638 [2024-11-06 08:00:11.224522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.638 [2024-11-06 08:00:11.255801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.638 [2024-11-06 08:00:11.255913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:48.638 [2024-11-06 08:00:11.255958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.187 ms 00:23:48.638 [2024-11-06 08:00:11.255971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.902 [2024-11-06 08:00:11.277803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.902 [2024-11-06 08:00:11.277922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:48.902 [2024-11-06 08:00:11.277967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.683 ms 00:23:48.902 [2024-11-06 08:00:11.277981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.902 [2024-11-06 08:00:11.278351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.902 [2024-11-06 08:00:11.278396] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:48.902 [2024-11-06 08:00:11.278430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.234 ms 00:23:48.902 [2024-11-06 08:00:11.278443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.902 [2024-11-06 08:00:11.309140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.902 [2024-11-06 08:00:11.309233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:48.902 [2024-11-06 08:00:11.309289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.652 ms 00:23:48.902 [2024-11-06 08:00:11.309305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.902 [2024-11-06 08:00:11.338672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.902 [2024-11-06 08:00:11.338765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:48.902 [2024-11-06 08:00:11.338809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.245 ms 00:23:48.902 [2024-11-06 08:00:11.338821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.902 [2024-11-06 08:00:11.367348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.902 [2024-11-06 08:00:11.367451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:48.902 [2024-11-06 08:00:11.367477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.433 ms 00:23:48.902 [2024-11-06 08:00:11.367490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.902 [2024-11-06 08:00:11.399570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.902 [2024-11-06 08:00:11.399679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:48.902 [2024-11-06 08:00:11.399723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.883 ms 00:23:48.902 [2024-11-06 08:00:11.399736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.902 [2024-11-06 08:00:11.399864] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:48.902 [2024-11-06 08:00:11.399892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:48.902 [2024-11-06 08:00:11.399911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:48.902 [2024-11-06 08:00:11.399924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:48.902 [2024-11-06 08:00:11.399940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:48.902 [2024-11-06 08:00:11.399953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:48.902 [2024-11-06 08:00:11.399969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:48.902 [2024-11-06 08:00:11.399981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:48.902 [2024-11-06 08:00:11.400003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:48.902 [2024-11-06 08:00:11.400016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:48.902 [2024-11-06 08:00:11.400032] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:48.902 [2024-11-06 08:00:11.400045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:48.902 [2024-11-06 08:00:11.400060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:48.902 [2024-11-06 08:00:11.400073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:48.902 [2024-11-06 08:00:11.400089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:48.902 [2024-11-06 08:00:11.400102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:48.902 [2024-11-06 08:00:11.400117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:48.902 [2024-11-06 08:00:11.400130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:48.902 [2024-11-06 08:00:11.400145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:48.902 [2024-11-06 08:00:11.400159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:48.902 [2024-11-06 08:00:11.400175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:48.902 [2024-11-06 08:00:11.400187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:48.902 [2024-11-06 08:00:11.400206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:48.902 [2024-11-06 08:00:11.400220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:48.902 [2024-11-06 08:00:11.400238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:48.902 [2024-11-06 08:00:11.400271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:48.902 [2024-11-06 08:00:11.400296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:48.902 [2024-11-06 08:00:11.400310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:48.902 [2024-11-06 08:00:11.400327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:48.902 [2024-11-06 08:00:11.400340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:48.902 [2024-11-06 08:00:11.400356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:48.902 [2024-11-06 08:00:11.400368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:48.902 [2024-11-06 08:00:11.400386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:48.902 [2024-11-06 08:00:11.400399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:48.902 [2024-11-06 08:00:11.400423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:48.902 
[2024-11-06 08:00:11.400435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:48.902 [2024-11-06 08:00:11.400451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:48.902 [2024-11-06 08:00:11.400463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:48.902 [2024-11-06 08:00:11.400479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:48.902 [2024-11-06 08:00:11.400491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:48.902 [2024-11-06 08:00:11.400510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:48.902 [2024-11-06 08:00:11.400524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:48.902 [2024-11-06 08:00:11.400539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:48.902 [2024-11-06 08:00:11.400552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:48.902 [2024-11-06 08:00:11.400567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:48.902 [2024-11-06 08:00:11.400580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:48.902 [2024-11-06 08:00:11.400594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.400606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.400623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.400636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.400651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.400666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.400681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.400693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.400708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.400746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.400765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.400788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.400819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.400832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:23:48.903 [2024-11-06 08:00:11.400847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.400860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.400875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.400888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.400905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.400919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.400935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.400948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.400965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.400977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.400993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.401006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.401027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.401071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.401088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.401102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.401118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.401131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.401147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.401160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.401176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.401189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.401205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.401218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.401235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.401248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.401277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.401293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.401313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.401327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.401343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.401372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.401716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.401736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.401752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.401765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.401781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.401795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.401810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.401823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.401841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:48.903 [2024-11-06 08:00:11.401865] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:48.903 [2024-11-06 08:00:11.401881] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b72774d1-8924-47f6-808c-25def4de7f7d 00:23:48.903 [2024-11-06 08:00:11.401895] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:48.903 [2024-11-06 08:00:11.401917] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:48.903 [2024-11-06 08:00:11.401930] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:48.903 [2024-11-06 08:00:11.401946] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:48.903 [2024-11-06 08:00:11.401962] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:48.903 [2024-11-06 08:00:11.401977] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:48.903 [2024-11-06 08:00:11.401990] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:48.903 [2024-11-06 08:00:11.402003] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:48.903 [2024-11-06 08:00:11.402014] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:23:48.903 [2024-11-06 08:00:11.402030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.903 [2024-11-06 08:00:11.402043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:48.904 [2024-11-06 08:00:11.402060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.173 ms 00:23:48.904 [2024-11-06 08:00:11.402073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.904 [2024-11-06 08:00:11.420878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.904 [2024-11-06 08:00:11.421257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:48.904 [2024-11-06 08:00:11.421315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.677 ms 00:23:48.904 [2024-11-06 08:00:11.421330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.904 [2024-11-06 08:00:11.421921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.904 [2024-11-06 08:00:11.421944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:48.904 [2024-11-06 08:00:11.421980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.510 ms 00:23:48.904 [2024-11-06 08:00:11.421992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.904 [2024-11-06 08:00:11.477168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.904 [2024-11-06 08:00:11.477280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:48.904 [2024-11-06 08:00:11.477308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.904 [2024-11-06 08:00:11.477321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.904 [2024-11-06 08:00:11.477440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.904 [2024-11-06 08:00:11.477457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:48.904 [2024-11-06 08:00:11.477474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.904 [2024-11-06 08:00:11.477486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.904 [2024-11-06 08:00:11.477683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.904 [2024-11-06 08:00:11.477714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:48.904 [2024-11-06 08:00:11.477732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.904 [2024-11-06 08:00:11.477745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.904 [2024-11-06 08:00:11.477783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.904 [2024-11-06 08:00:11.477798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:48.904 [2024-11-06 08:00:11.477814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.904 [2024-11-06 08:00:11.477826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.162 [2024-11-06 08:00:11.593254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:49.162 [2024-11-06 08:00:11.593340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:49.162 [2024-11-06 08:00:11.593388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:23:49.162 [2024-11-06 08:00:11.593401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.163 [2024-11-06 08:00:11.682469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:49.163 [2024-11-06 08:00:11.682574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:49.163 [2024-11-06 08:00:11.682618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:49.163 [2024-11-06 08:00:11.682632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.163 [2024-11-06 08:00:11.682813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:49.163 [2024-11-06 08:00:11.682837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:49.163 [2024-11-06 08:00:11.682854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:49.163 [2024-11-06 08:00:11.682866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.163 [2024-11-06 08:00:11.682949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:49.163 [2024-11-06 08:00:11.682967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:49.163 [2024-11-06 08:00:11.682983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:49.163 [2024-11-06 08:00:11.682995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.163 [2024-11-06 08:00:11.683139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:49.163 [2024-11-06 08:00:11.683162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:49.163 [2024-11-06 08:00:11.683179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:49.163 [2024-11-06 08:00:11.683191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.163 [2024-11-06 08:00:11.683251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:49.163 [2024-11-06 08:00:11.683288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:49.163 [2024-11-06 08:00:11.683317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:49.163 [2024-11-06 08:00:11.683330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.163 [2024-11-06 08:00:11.683394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:49.163 [2024-11-06 08:00:11.683411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:49.163 [2024-11-06 08:00:11.683430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:49.163 [2024-11-06 08:00:11.683442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.163 [2024-11-06 08:00:11.683517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:49.163 [2024-11-06 08:00:11.683535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:49.163 [2024-11-06 08:00:11.683551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:49.163 [2024-11-06 08:00:11.683563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.163 [2024-11-06 08:00:11.683767] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 474.608 ms, result 0 00:23:49.163 true 00:23:49.163 08:00:11 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 76772 
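With the 'FTL shutdown' management process finished (result 0), restore.sh tears down the first target process and then generates the payload it will later verify; the autotest_common.sh trace on the next lines shows killprocess checking the pid and signalling reactor_0 before the dd/md5sum/spdk_dd steps run. Condensed into a plain sketch of what the traced commands are doing (paths shortened relative to the spdk repo; the $pid variable and the redirection of the saved config into ftl.json are assumptions, since the trace only shows the individual commands):

  # save the live bdev subsystem configuration as JSON (restore.sh@61-63);
  # the echoed braces wrap the rpc.py output into a complete config file
  {
    echo '{"subsystems": ['
    scripts/rpc.py save_subsystem_config -n bdev
    echo ']}'
  } > test/ftl/config/ftl.json    # assumed destination, matching the --json path used below

  scripts/rpc.py bdev_ftl_unload -b ftl0    # @65: clean unload, persists FTL metadata (prints "true")
  killprocess "$pid"                        # @66: stop the app under test (pid 76772 in this run)

  dd if=/dev/urandom of=test/ftl/testfile bs=4K count=256K    # @69: random test payload
  md5sum test/ftl/testfile                                    # @70: reference checksum
  build/bin/spdk_dd --if=test/ftl/testfile --ob=ftl0 \
      --json=test/ftl/config/ftl.json                         # @73: write the payload through ftl0

The sizes line up with the dd summary below: 256K records of 4 KiB each is 262144 x 4096 = 1073741824 bytes, the 1.0 GiB the log reports copying at about 215 MB/s.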
00:23:49.163 08:00:11 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 76772 ']' 00:23:49.163 08:00:11 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 76772 00:23:49.163 08:00:11 ftl.ftl_restore -- common/autotest_common.sh@955 -- # uname 00:23:49.163 08:00:11 ftl.ftl_restore -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:49.163 08:00:11 ftl.ftl_restore -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76772 00:23:49.163 killing process with pid 76772 00:23:49.163 08:00:11 ftl.ftl_restore -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:49.163 08:00:11 ftl.ftl_restore -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:49.163 08:00:11 ftl.ftl_restore -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76772' 00:23:49.163 08:00:11 ftl.ftl_restore -- common/autotest_common.sh@969 -- # kill 76772 00:23:49.163 08:00:11 ftl.ftl_restore -- common/autotest_common.sh@974 -- # wait 76772 00:23:54.431 08:00:16 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:23:59.701 262144+0 records in 00:23:59.701 262144+0 records out 00:23:59.701 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 5.00485 s, 215 MB/s 00:23:59.701 08:00:21 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:01.770 08:00:24 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:01.770 [2024-11-06 08:00:24.151468] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:24:01.770 [2024-11-06 08:00:24.151673] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77036 ] 00:24:01.770 [2024-11-06 08:00:24.349809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:02.031 [2024-11-06 08:00:24.526501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:02.631 [2024-11-06 08:00:24.935181] bdev.c:8607:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:02.631 [2024-11-06 08:00:24.935632] bdev.c:8607:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:02.631 [2024-11-06 08:00:25.107934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.631 [2024-11-06 08:00:25.108354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:02.631 [2024-11-06 08:00:25.108397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:02.631 [2024-11-06 08:00:25.108412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.631 [2024-11-06 08:00:25.108506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.631 [2024-11-06 08:00:25.108526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:02.631 [2024-11-06 08:00:25.108544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:24:02.631 [2024-11-06 08:00:25.108556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.631 [2024-11-06 08:00:25.108590] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:24:02.631 [2024-11-06 08:00:25.109569] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:02.631 [2024-11-06 08:00:25.109605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.631 [2024-11-06 08:00:25.109619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:02.631 [2024-11-06 08:00:25.109633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.023 ms 00:24:02.631 [2024-11-06 08:00:25.109645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.631 [2024-11-06 08:00:25.112293] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:02.631 [2024-11-06 08:00:25.129013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.631 [2024-11-06 08:00:25.129096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:02.631 [2024-11-06 08:00:25.129116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.722 ms 00:24:02.631 [2024-11-06 08:00:25.129129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.631 [2024-11-06 08:00:25.129223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.631 [2024-11-06 08:00:25.129247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:02.631 [2024-11-06 08:00:25.129274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:24:02.631 [2024-11-06 08:00:25.129286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.631 [2024-11-06 08:00:25.141100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.631 [2024-11-06 08:00:25.141185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:02.631 [2024-11-06 08:00:25.141204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.699 ms 00:24:02.631 [2024-11-06 08:00:25.141216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.631 [2024-11-06 08:00:25.141371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.631 [2024-11-06 08:00:25.141394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:02.631 [2024-11-06 08:00:25.141407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:24:02.631 [2024-11-06 08:00:25.141420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.631 [2024-11-06 08:00:25.141544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.631 [2024-11-06 08:00:25.141564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:02.631 [2024-11-06 08:00:25.141577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:24:02.631 [2024-11-06 08:00:25.141588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.631 [2024-11-06 08:00:25.141628] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:02.631 [2024-11-06 08:00:25.147190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.631 [2024-11-06 08:00:25.147241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:02.631 [2024-11-06 08:00:25.147257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.575 ms 00:24:02.631 [2024-11-06 08:00:25.147289] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.631 [2024-11-06 08:00:25.147334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.631 [2024-11-06 08:00:25.147349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:02.631 [2024-11-06 08:00:25.147362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:02.631 [2024-11-06 08:00:25.147373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.631 [2024-11-06 08:00:25.147429] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:02.631 [2024-11-06 08:00:25.147465] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:02.631 [2024-11-06 08:00:25.147512] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:02.631 [2024-11-06 08:00:25.147537] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:02.631 [2024-11-06 08:00:25.147652] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:02.631 [2024-11-06 08:00:25.147668] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:02.631 [2024-11-06 08:00:25.147684] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:02.631 [2024-11-06 08:00:25.147699] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:02.631 [2024-11-06 08:00:25.147712] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:02.631 [2024-11-06 08:00:25.147725] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:02.631 [2024-11-06 08:00:25.147736] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:02.631 [2024-11-06 08:00:25.147749] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:02.631 [2024-11-06 08:00:25.147761] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:02.631 [2024-11-06 08:00:25.147778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.631 [2024-11-06 08:00:25.147790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:02.631 [2024-11-06 08:00:25.147802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.354 ms 00:24:02.631 [2024-11-06 08:00:25.147814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.631 [2024-11-06 08:00:25.147911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.631 [2024-11-06 08:00:25.147928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:02.631 [2024-11-06 08:00:25.147940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:24:02.631 [2024-11-06 08:00:25.147951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.631 [2024-11-06 08:00:25.148075] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:02.631 [2024-11-06 08:00:25.148101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:02.631 [2024-11-06 08:00:25.148113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:24:02.631 [2024-11-06 08:00:25.148125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:02.631 [2024-11-06 08:00:25.148137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:02.631 [2024-11-06 08:00:25.148147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:02.631 [2024-11-06 08:00:25.148158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:02.631 [2024-11-06 08:00:25.148169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:02.631 [2024-11-06 08:00:25.148180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:02.631 [2024-11-06 08:00:25.148191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:02.631 [2024-11-06 08:00:25.148202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:02.631 [2024-11-06 08:00:25.148212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:02.631 [2024-11-06 08:00:25.148222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:02.631 [2024-11-06 08:00:25.148232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:02.631 [2024-11-06 08:00:25.148243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:02.631 [2024-11-06 08:00:25.148285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:02.631 [2024-11-06 08:00:25.148297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:02.631 [2024-11-06 08:00:25.148308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:02.631 [2024-11-06 08:00:25.148319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:02.631 [2024-11-06 08:00:25.148330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:02.631 [2024-11-06 08:00:25.148340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:02.631 [2024-11-06 08:00:25.148350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:02.631 [2024-11-06 08:00:25.148360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:02.631 [2024-11-06 08:00:25.148371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:02.631 [2024-11-06 08:00:25.148381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:02.631 [2024-11-06 08:00:25.148392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:02.631 [2024-11-06 08:00:25.148403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:02.631 [2024-11-06 08:00:25.148413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:02.631 [2024-11-06 08:00:25.148424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:02.631 [2024-11-06 08:00:25.148434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:02.631 [2024-11-06 08:00:25.148445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:02.631 [2024-11-06 08:00:25.148455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:02.631 [2024-11-06 08:00:25.148466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:02.631 [2024-11-06 08:00:25.148476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:02.631 [2024-11-06 08:00:25.148486] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:24:02.631 [2024-11-06 08:00:25.148496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:02.631 [2024-11-06 08:00:25.148507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:02.631 [2024-11-06 08:00:25.148517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:02.631 [2024-11-06 08:00:25.148527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:02.631 [2024-11-06 08:00:25.148537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:02.631 [2024-11-06 08:00:25.148547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:02.631 [2024-11-06 08:00:25.148558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:02.631 [2024-11-06 08:00:25.148570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:02.631 [2024-11-06 08:00:25.148580] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:02.631 [2024-11-06 08:00:25.148592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:02.631 [2024-11-06 08:00:25.148603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:02.631 [2024-11-06 08:00:25.148614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:02.631 [2024-11-06 08:00:25.148629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:02.631 [2024-11-06 08:00:25.148641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:02.631 [2024-11-06 08:00:25.148652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:02.631 [2024-11-06 08:00:25.148662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:02.631 [2024-11-06 08:00:25.148672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:02.631 [2024-11-06 08:00:25.148683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:02.632 [2024-11-06 08:00:25.148695] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:02.632 [2024-11-06 08:00:25.148710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:02.632 [2024-11-06 08:00:25.148723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:02.632 [2024-11-06 08:00:25.148734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:02.632 [2024-11-06 08:00:25.148746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:02.632 [2024-11-06 08:00:25.148757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:02.632 [2024-11-06 08:00:25.148768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:02.632 [2024-11-06 08:00:25.148780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:02.632 [2024-11-06 08:00:25.148791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:02.632 [2024-11-06 08:00:25.148802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:02.632 [2024-11-06 08:00:25.148813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:02.632 [2024-11-06 08:00:25.148824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:02.632 [2024-11-06 08:00:25.148836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:02.632 [2024-11-06 08:00:25.148847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:02.632 [2024-11-06 08:00:25.148858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:02.632 [2024-11-06 08:00:25.148870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:02.632 [2024-11-06 08:00:25.148881] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:02.632 [2024-11-06 08:00:25.148893] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:02.632 [2024-11-06 08:00:25.148913] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:02.632 [2024-11-06 08:00:25.148925] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:02.632 [2024-11-06 08:00:25.148937] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:02.632 [2024-11-06 08:00:25.148948] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:02.632 [2024-11-06 08:00:25.148961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.632 [2024-11-06 08:00:25.148973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:02.632 [2024-11-06 08:00:25.148985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.953 ms 00:24:02.632 [2024-11-06 08:00:25.148997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.632 [2024-11-06 08:00:25.194660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.632 [2024-11-06 08:00:25.194762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:02.632 [2024-11-06 08:00:25.194783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.578 ms 00:24:02.632 [2024-11-06 08:00:25.194796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.632 [2024-11-06 08:00:25.194936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.632 [2024-11-06 08:00:25.194965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:02.632 [2024-11-06 08:00:25.194979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.072 ms 00:24:02.632 [2024-11-06 08:00:25.194991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.632 [2024-11-06 08:00:25.255425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.632 [2024-11-06 08:00:25.255529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:02.632 [2024-11-06 08:00:25.255551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.318 ms 00:24:02.632 [2024-11-06 08:00:25.255564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.632 [2024-11-06 08:00:25.255669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.632 [2024-11-06 08:00:25.255688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:02.632 [2024-11-06 08:00:25.255703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:02.632 [2024-11-06 08:00:25.255721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.632 [2024-11-06 08:00:25.256652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.632 [2024-11-06 08:00:25.256679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:02.632 [2024-11-06 08:00:25.256709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.792 ms 00:24:02.632 [2024-11-06 08:00:25.256720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.632 [2024-11-06 08:00:25.256915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.632 [2024-11-06 08:00:25.256935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:02.632 [2024-11-06 08:00:25.256947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:24:02.632 [2024-11-06 08:00:25.256959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.891 [2024-11-06 08:00:25.280508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.891 [2024-11-06 08:00:25.280610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:02.891 [2024-11-06 08:00:25.280632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.509 ms 00:24:02.891 [2024-11-06 08:00:25.280667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.891 [2024-11-06 08:00:25.298314] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:24:02.891 [2024-11-06 08:00:25.298416] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:02.891 [2024-11-06 08:00:25.298438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.891 [2024-11-06 08:00:25.298453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:02.891 [2024-11-06 08:00:25.298470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.537 ms 00:24:02.891 [2024-11-06 08:00:25.298483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.891 [2024-11-06 08:00:25.327559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.891 [2024-11-06 08:00:25.327684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:02.891 [2024-11-06 08:00:25.327739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.993 ms 00:24:02.891 [2024-11-06 08:00:25.327752] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.891 [2024-11-06 08:00:25.344681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.891 [2024-11-06 08:00:25.344794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:02.891 [2024-11-06 08:00:25.344815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.787 ms 00:24:02.891 [2024-11-06 08:00:25.344828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.891 [2024-11-06 08:00:25.360463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.891 [2024-11-06 08:00:25.360542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:02.891 [2024-11-06 08:00:25.360563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.558 ms 00:24:02.891 [2024-11-06 08:00:25.360576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.891 [2024-11-06 08:00:25.361768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.891 [2024-11-06 08:00:25.361806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:02.891 [2024-11-06 08:00:25.361822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.951 ms 00:24:02.891 [2024-11-06 08:00:25.361834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.891 [2024-11-06 08:00:25.465332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.891 [2024-11-06 08:00:25.465421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:02.891 [2024-11-06 08:00:25.465449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 103.464 ms 00:24:02.891 [2024-11-06 08:00:25.465465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.891 [2024-11-06 08:00:25.484913] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:02.891 [2024-11-06 08:00:25.490923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.891 [2024-11-06 08:00:25.490971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:02.891 [2024-11-06 08:00:25.490995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.227 ms 00:24:02.891 [2024-11-06 08:00:25.491011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.891 [2024-11-06 08:00:25.491267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.891 [2024-11-06 08:00:25.491296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:02.891 [2024-11-06 08:00:25.491314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:24:02.891 [2024-11-06 08:00:25.491330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.891 [2024-11-06 08:00:25.491484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.891 [2024-11-06 08:00:25.491533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:02.891 [2024-11-06 08:00:25.491551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:24:02.891 [2024-11-06 08:00:25.491566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.891 [2024-11-06 08:00:25.491609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.891 [2024-11-06 08:00:25.491630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller
00:24:02.891 [2024-11-06 08:00:25.491646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:24:02.891 [2024-11-06 08:00:25.491661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:02.891 [2024-11-06 08:00:25.491728] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:24:02.891 [2024-11-06 08:00:25.491752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:02.891 [2024-11-06 08:00:25.491778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:24:02.891 [2024-11-06 08:00:25.491794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms
00:24:02.891 [2024-11-06 08:00:25.491809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:03.150 [2024-11-06 08:00:25.532417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:03.150 [2024-11-06 08:00:25.532506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:24:03.150 [2024-11-06 08:00:25.532533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.565 ms
00:24:03.150 [2024-11-06 08:00:25.532548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:03.150 [2024-11-06 08:00:25.532719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:03.150 [2024-11-06 08:00:25.532744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:24:03.150 [2024-11-06 08:00:25.532761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms
00:24:03.150 [2024-11-06 08:00:25.532776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:03.150 [2024-11-06 08:00:25.534758] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 426.078 ms, result 0
00:24:04.085  [2024-11-06T08:00:27.647Z] Copying: 23/1024 [MB] (23 MBps)
[progress updates continued in ~1 s steps through 1019/1024 MB, each interval at 22-25 MBps]
[2024-11-06T08:01:08.964Z] Copying: 1024/1024 [MB] (average 23 MBps)
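A quick consistency check on the throughput reported above, as a minimal sketch (Python); the two samples are copied from the progress lines kept above:

    from datetime import datetime

    # Endpoint progress samples from the log (trailing Z dropped).
    t0, mb0 = datetime.fromisoformat("2024-11-06T08:00:27.647"), 23
    t1, mb1 = datetime.fromisoformat("2024-11-06T08:01:08.964"), 1024

    rate = (mb1 - mb0) / (t1 - t0).total_seconds()
    print(f"{rate:.1f} MBps")  # ~24.2 MBps over the sampled window

The reported "average 23 MBps" also counts the ~2 s between FTL startup finishing and the first sample, so it comes out slightly lower.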
00:24:46.335 [2024-11-06 08:01:08.744034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:46.335 [2024-11-06 08:01:08.744111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:24:46.335 [2024-11-06 08:01:08.744133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:24:46.335 [2024-11-06 08:01:08.744146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:46.335 [2024-11-06 08:01:08.744177] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:24:46.335 [2024-11-06 08:01:08.747934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:46.335 [2024-11-06 08:01:08.748004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:24:46.335 [2024-11-06 08:01:08.748019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.733 ms
00:24:46.335 [2024-11-06 08:01:08.748031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:46.335 [2024-11-06 08:01:08.749686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:46.335 [2024-11-06 08:01:08.749760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:24:46.335 [2024-11-06 08:01:08.749792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.617 ms
00:24:46.335 [2024-11-06 08:01:08.749804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:46.335 [2024-11-06 08:01:08.767092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:46.335 [2024-11-06 08:01:08.767171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:24:46.335 [2024-11-06 08:01:08.767191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.262 ms
00:24:46.335 [2024-11-06 08:01:08.767203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:46.335 [2024-11-06 08:01:08.774150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:46.335 [2024-11-06 08:01:08.774198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:24:46.335 [2024-11-06 08:01:08.774213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.901 ms
00:24:46.335 [2024-11-06 08:01:08.774224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
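The Action/name/duration/status quartets that mngt/ftl_mngt.c prints throughout this log can be tallied mechanically. A sketch under two assumptions: one log entry per line, and "jenkins.log" as a hypothetical stand-in path for this console output:

    import re

    # Pair each trace_step "name:" with the "duration:" that follows it.
    name = None
    for line in open("jenkins.log"):
        if (m := re.search(r"trace_step: .*name: (.*)", line)):
            name = m.group(1).strip()
        elif name and (m := re.search(r"trace_step: .*duration: ([\d.]+) ms", line)):
            print(f"{float(m.group(1)):10.3f} ms  {name}")
            name = None

Run over the shutdown sequence above, this would show Persist L2P (17.262 ms) and the metadata persist steps dominating the shutdown time.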
00:24:46.335 [2024-11-06 08:01:08.808354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:46.335 [2024-11-06 08:01:08.808423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:24:46.335 [2024-11-06 08:01:08.808443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.991 ms
00:24:46.335 [2024-11-06 08:01:08.808455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:46.335 [2024-11-06 08:01:08.828250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:46.335 [2024-11-06 08:01:08.828362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:24:46.335 [2024-11-06 08:01:08.828400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.729 ms
00:24:46.335 [2024-11-06 08:01:08.828412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:46.335 [2024-11-06 08:01:08.828592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:46.335 [2024-11-06 08:01:08.828613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:24:46.335 [2024-11-06 08:01:08.828638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms
00:24:46.335 [2024-11-06 08:01:08.828655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:46.335 [2024-11-06 08:01:08.861913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:46.335 [2024-11-06 08:01:08.862042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:24:46.335 [2024-11-06 08:01:08.862062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.234 ms
00:24:46.335 [2024-11-06 08:01:08.862075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:46.335 [2024-11-06 08:01:08.895403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:46.335 [2024-11-06 08:01:08.895472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:24:46.335 [2024-11-06 08:01:08.895509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.241 ms
00:24:46.335 [2024-11-06 08:01:08.895522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:46.335 [2024-11-06 08:01:08.928298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:46.335 [2024-11-06 08:01:08.928395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:24:46.336 [2024-11-06 08:01:08.928416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.707 ms
00:24:46.336 [2024-11-06 08:01:08.928427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:46.336 [2024-11-06 08:01:08.961549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:46.336 [2024-11-06 08:01:08.961618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:24:46.336 [2024-11-06 08:01:08.961638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.986 ms
00:24:46.336 [2024-11-06 08:01:08.961650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:46.336 [2024-11-06 08:01:08.961713] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:24:46.336 [2024-11-06 08:01:08.961751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1 through Band 100 (all identical): 0 / 261120 wr_cnt: 0 state: free
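Each band above reports 0 valid of 261120 addressable blocks. Assuming the default 4 KiB FTL block size (an assumption; the block size is not printed in this log), the 100 bands tie back to the 102400 MiB data_btm region dumped earlier:

    # Band geometry sanity check, assuming 4 KiB FTL blocks.
    BLOCK = 4096
    band_mib = 261120 * BLOCK / 2**20
    print(band_mib)        # 1020.0 MiB addressable per band
    print(100 * band_mib)  # 102000.0 MiB across 100 bands, vs. the
                           # 102400.00 MiB data_btm region (the difference
                           # is presumably per-band metadata overhead)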
00:24:46.595 [2024-11-06 08:01:08.962973] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:24:46.595 [2024-11-06 08:01:08.962993] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b72774d1-8924-47f6-808c-25def4de7f7d
00:24:46.595 [2024-11-06
08:01:08.963005] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:46.595 [2024-11-06 08:01:08.963038] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:46.595 [2024-11-06 08:01:08.963049] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:46.595 [2024-11-06 08:01:08.963061] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:46.595 [2024-11-06 08:01:08.963073] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:46.596 [2024-11-06 08:01:08.963085] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:46.596 [2024-11-06 08:01:08.963096] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:46.596 [2024-11-06 08:01:08.963120] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:46.596 [2024-11-06 08:01:08.963131] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:46.596 [2024-11-06 08:01:08.963142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.596 [2024-11-06 08:01:08.963154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:46.596 [2024-11-06 08:01:08.963166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.430 ms 00:24:46.596 [2024-11-06 08:01:08.963178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.596 [2024-11-06 08:01:08.981332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.596 [2024-11-06 08:01:08.981389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:46.596 [2024-11-06 08:01:08.981408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.102 ms 00:24:46.596 [2024-11-06 08:01:08.981419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.596 [2024-11-06 08:01:08.981928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.596 [2024-11-06 08:01:08.981953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:46.596 [2024-11-06 08:01:08.981966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.468 ms 00:24:46.596 [2024-11-06 08:01:08.981978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.596 [2024-11-06 08:01:09.029719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.596 [2024-11-06 08:01:09.029783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:46.596 [2024-11-06 08:01:09.029803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.596 [2024-11-06 08:01:09.029815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.596 [2024-11-06 08:01:09.029906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.596 [2024-11-06 08:01:09.029922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:46.596 [2024-11-06 08:01:09.029934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.596 [2024-11-06 08:01:09.029945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.596 [2024-11-06 08:01:09.030087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.596 [2024-11-06 08:01:09.030108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:46.596 [2024-11-06 08:01:09.030122] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.596 [2024-11-06 08:01:09.030133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.596 [2024-11-06 08:01:09.030156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.596 [2024-11-06 08:01:09.030170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:46.596 [2024-11-06 08:01:09.030183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.596 [2024-11-06 08:01:09.030203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.596 [2024-11-06 08:01:09.151068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.596 [2024-11-06 08:01:09.151136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:46.596 [2024-11-06 08:01:09.151156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.596 [2024-11-06 08:01:09.151168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.855 [2024-11-06 08:01:09.245227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.855 [2024-11-06 08:01:09.245315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:46.855 [2024-11-06 08:01:09.245336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.855 [2024-11-06 08:01:09.245348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.855 [2024-11-06 08:01:09.245471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.855 [2024-11-06 08:01:09.245496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:46.855 [2024-11-06 08:01:09.245509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.855 [2024-11-06 08:01:09.245520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.855 [2024-11-06 08:01:09.245570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.855 [2024-11-06 08:01:09.245587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:46.855 [2024-11-06 08:01:09.245600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.855 [2024-11-06 08:01:09.245612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.855 [2024-11-06 08:01:09.245747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.855 [2024-11-06 08:01:09.245767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:46.855 [2024-11-06 08:01:09.245787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.855 [2024-11-06 08:01:09.245798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.855 [2024-11-06 08:01:09.245849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.855 [2024-11-06 08:01:09.245868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:46.855 [2024-11-06 08:01:09.245881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.855 [2024-11-06 08:01:09.245892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.855 [2024-11-06 08:01:09.245950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.855 [2024-11-06 08:01:09.245965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 
00:24:46.855 [2024-11-06 08:01:09.245985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.855 [2024-11-06 08:01:09.245996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.855 [2024-11-06 08:01:09.246050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.855 [2024-11-06 08:01:09.246067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:46.855 [2024-11-06 08:01:09.246078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.855 [2024-11-06 08:01:09.246089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.855 [2024-11-06 08:01:09.246268] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 502.167 ms, result 0 00:24:48.231 00:24:48.231 00:24:48.231 08:01:10 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:24:48.231 [2024-11-06 08:01:10.627398] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:24:48.231 [2024-11-06 08:01:10.627656] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77502 ] 00:24:48.231 [2024-11-06 08:01:10.825429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.490 [2024-11-06 08:01:10.969442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:48.750 [2024-11-06 08:01:11.349945] bdev.c:8607:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:48.750 [2024-11-06 08:01:11.350057] bdev.c:8607:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:49.011 [2024-11-06 08:01:11.515596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.011 [2024-11-06 08:01:11.515702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:49.011 [2024-11-06 08:01:11.515746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:49.011 [2024-11-06 08:01:11.515759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.011 [2024-11-06 08:01:11.515834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.011 [2024-11-06 08:01:11.515853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:49.011 [2024-11-06 08:01:11.515871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:24:49.011 [2024-11-06 08:01:11.515883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.011 [2024-11-06 08:01:11.515914] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:49.011 [2024-11-06 08:01:11.516919] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:49.011 [2024-11-06 08:01:11.516962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.011 [2024-11-06 08:01:11.516992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:49.011 [2024-11-06 08:01:11.517006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.054 ms 
00:24:49.011 [2024-11-06 08:01:11.517017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.011 [2024-11-06 08:01:11.519144] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:49.011 [2024-11-06 08:01:11.536366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.011 [2024-11-06 08:01:11.536425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:49.011 [2024-11-06 08:01:11.536446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.221 ms 00:24:49.011 [2024-11-06 08:01:11.536459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.011 [2024-11-06 08:01:11.536570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.011 [2024-11-06 08:01:11.536595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:49.011 [2024-11-06 08:01:11.536609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:24:49.011 [2024-11-06 08:01:11.536636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.011 [2024-11-06 08:01:11.545899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.011 [2024-11-06 08:01:11.545961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:49.011 [2024-11-06 08:01:11.545996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.138 ms 00:24:49.011 [2024-11-06 08:01:11.546008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.011 [2024-11-06 08:01:11.546134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.011 [2024-11-06 08:01:11.546163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:49.011 [2024-11-06 08:01:11.546176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:24:49.011 [2024-11-06 08:01:11.546187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.011 [2024-11-06 08:01:11.546295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.011 [2024-11-06 08:01:11.546316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:49.011 [2024-11-06 08:01:11.546330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:24:49.011 [2024-11-06 08:01:11.546341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.011 [2024-11-06 08:01:11.546383] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:49.011 [2024-11-06 08:01:11.551511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.011 [2024-11-06 08:01:11.551549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:49.011 [2024-11-06 08:01:11.551566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.140 ms 00:24:49.011 [2024-11-06 08:01:11.551583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.011 [2024-11-06 08:01:11.551630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.011 [2024-11-06 08:01:11.551659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:49.011 [2024-11-06 08:01:11.551672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:49.011 [2024-11-06 08:01:11.551684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.011 
[2024-11-06 08:01:11.551788] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:49.011 [2024-11-06 08:01:11.551822] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:49.011 [2024-11-06 08:01:11.551867] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:49.011 [2024-11-06 08:01:11.551891] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:49.011 [2024-11-06 08:01:11.552004] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:49.011 [2024-11-06 08:01:11.552031] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:49.011 [2024-11-06 08:01:11.552047] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:49.011 [2024-11-06 08:01:11.552063] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:49.011 [2024-11-06 08:01:11.552077] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:49.011 [2024-11-06 08:01:11.552090] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:49.011 [2024-11-06 08:01:11.552102] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:49.011 [2024-11-06 08:01:11.552114] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:49.011 [2024-11-06 08:01:11.552125] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:49.011 [2024-11-06 08:01:11.552143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.011 [2024-11-06 08:01:11.552156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:49.011 [2024-11-06 08:01:11.552169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.359 ms 00:24:49.011 [2024-11-06 08:01:11.552180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.011 [2024-11-06 08:01:11.552309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.011 [2024-11-06 08:01:11.552334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:49.011 [2024-11-06 08:01:11.552349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:24:49.011 [2024-11-06 08:01:11.552360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.011 [2024-11-06 08:01:11.552485] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:49.011 [2024-11-06 08:01:11.552516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:49.011 [2024-11-06 08:01:11.552529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:49.011 [2024-11-06 08:01:11.552542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:49.011 [2024-11-06 08:01:11.552553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:49.011 [2024-11-06 08:01:11.552564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:49.011 [2024-11-06 08:01:11.552575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:49.011 [2024-11-06 08:01:11.552585] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region band_md 00:24:49.011 [2024-11-06 08:01:11.552596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:49.011 [2024-11-06 08:01:11.552607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:49.011 [2024-11-06 08:01:11.552617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:49.011 [2024-11-06 08:01:11.552628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:49.011 [2024-11-06 08:01:11.552638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:49.012 [2024-11-06 08:01:11.552648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:49.012 [2024-11-06 08:01:11.552660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:49.012 [2024-11-06 08:01:11.552683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:49.012 [2024-11-06 08:01:11.552694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:49.012 [2024-11-06 08:01:11.552705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:49.012 [2024-11-06 08:01:11.552716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:49.012 [2024-11-06 08:01:11.552727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:49.012 [2024-11-06 08:01:11.552740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:49.012 [2024-11-06 08:01:11.552751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:49.012 [2024-11-06 08:01:11.552762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:49.012 [2024-11-06 08:01:11.552773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:49.012 [2024-11-06 08:01:11.552784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:49.012 [2024-11-06 08:01:11.552795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:49.012 [2024-11-06 08:01:11.552805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:49.012 [2024-11-06 08:01:11.552816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:49.012 [2024-11-06 08:01:11.552827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:49.012 [2024-11-06 08:01:11.552838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:49.012 [2024-11-06 08:01:11.552849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:49.012 [2024-11-06 08:01:11.552860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:49.012 [2024-11-06 08:01:11.552871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:49.012 [2024-11-06 08:01:11.552882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:49.012 [2024-11-06 08:01:11.552893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:49.012 [2024-11-06 08:01:11.552905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:49.012 [2024-11-06 08:01:11.552916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:49.012 [2024-11-06 08:01:11.552926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:49.012 [2024-11-06 08:01:11.552937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:49.012 [2024-11-06 
08:01:11.552947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:49.012 [2024-11-06 08:01:11.552959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:49.012 [2024-11-06 08:01:11.552969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:49.012 [2024-11-06 08:01:11.552980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:49.012 [2024-11-06 08:01:11.552990] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:49.012 [2024-11-06 08:01:11.553002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:49.012 [2024-11-06 08:01:11.553013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:49.012 [2024-11-06 08:01:11.553024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:49.012 [2024-11-06 08:01:11.553036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:49.012 [2024-11-06 08:01:11.553075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:49.012 [2024-11-06 08:01:11.553086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:49.012 [2024-11-06 08:01:11.553097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:49.012 [2024-11-06 08:01:11.553108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:49.012 [2024-11-06 08:01:11.553120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:49.012 [2024-11-06 08:01:11.553133] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:49.012 [2024-11-06 08:01:11.553148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:49.012 [2024-11-06 08:01:11.553162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:49.012 [2024-11-06 08:01:11.553173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:49.012 [2024-11-06 08:01:11.553185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:49.012 [2024-11-06 08:01:11.553197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:49.012 [2024-11-06 08:01:11.553209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:49.012 [2024-11-06 08:01:11.553220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:49.012 [2024-11-06 08:01:11.553232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:49.012 [2024-11-06 08:01:11.553243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:49.012 [2024-11-06 08:01:11.553271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:49.012 [2024-11-06 08:01:11.553284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:49.012 [2024-11-06 08:01:11.553296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:49.012 [2024-11-06 08:01:11.553308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:49.012 [2024-11-06 08:01:11.553320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:49.012 [2024-11-06 08:01:11.553331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:49.012 [2024-11-06 08:01:11.553343] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:49.012 [2024-11-06 08:01:11.553359] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:49.012 [2024-11-06 08:01:11.553378] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:49.012 [2024-11-06 08:01:11.553391] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:49.012 [2024-11-06 08:01:11.553402] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:49.012 [2024-11-06 08:01:11.553414] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:49.012 [2024-11-06 08:01:11.553427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.012 [2024-11-06 08:01:11.553439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:49.012 [2024-11-06 08:01:11.553452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.011 ms 00:24:49.012 [2024-11-06 08:01:11.553464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.012 [2024-11-06 08:01:11.593685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.012 [2024-11-06 08:01:11.593750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:49.012 [2024-11-06 08:01:11.593788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.151 ms 00:24:49.012 [2024-11-06 08:01:11.593801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.012 [2024-11-06 08:01:11.593928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.012 [2024-11-06 08:01:11.593950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:49.012 [2024-11-06 08:01:11.593964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:24:49.012 [2024-11-06 08:01:11.593975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.274 [2024-11-06 08:01:11.651043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.274 [2024-11-06 08:01:11.651108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:49.274 [2024-11-06 08:01:11.651146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.951 ms 00:24:49.274 [2024-11-06 08:01:11.651159] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.274 [2024-11-06 08:01:11.651239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.274 [2024-11-06 08:01:11.651259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:49.274 [2024-11-06 08:01:11.651305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:49.274 [2024-11-06 08:01:11.651324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.274 [2024-11-06 08:01:11.651998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.274 [2024-11-06 08:01:11.652029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:49.274 [2024-11-06 08:01:11.652045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.561 ms 00:24:49.274 [2024-11-06 08:01:11.652072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.274 [2024-11-06 08:01:11.652263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.274 [2024-11-06 08:01:11.652304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:49.274 [2024-11-06 08:01:11.652321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:24:49.274 [2024-11-06 08:01:11.652333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.274 [2024-11-06 08:01:11.671935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.274 [2024-11-06 08:01:11.671996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:49.274 [2024-11-06 08:01:11.672033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.561 ms 00:24:49.274 [2024-11-06 08:01:11.672051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.274 [2024-11-06 08:01:11.688495] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:49.274 [2024-11-06 08:01:11.688551] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:49.274 [2024-11-06 08:01:11.688590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.274 [2024-11-06 08:01:11.688604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:49.274 [2024-11-06 08:01:11.688619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.333 ms 00:24:49.274 [2024-11-06 08:01:11.688631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.274 [2024-11-06 08:01:11.717257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.274 [2024-11-06 08:01:11.717347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:49.274 [2024-11-06 08:01:11.717368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.453 ms 00:24:49.274 [2024-11-06 08:01:11.717382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.274 [2024-11-06 08:01:11.733504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.274 [2024-11-06 08:01:11.733563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:49.274 [2024-11-06 08:01:11.733584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.063 ms 00:24:49.274 [2024-11-06 08:01:11.733597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
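For context on the Action/name/duration/status quartets above: they are FTL's managed-startup steps, emitted when an FTL bdev is opened over a base (data) device and an NV-cache device. A minimal sketch of how such an instance is created via the SPDK RPC — the names ftl0 and nvc0n1p0 appear in this log, while nvme0n1 is an assumed placeholder for the base bdev, which this excerpt does not name:

    # Sketch only, not taken from this run: create an FTL bdev like the
    # ftl0 instance traced above. nvme0n1 is a hypothetical base bdev name;
    # nvc0n1p0 is named later in this log as the write buffer cache.
    ./scripts/rpc.py bdev_ftl_create -b ftl0 -d nvme0n1 -c nvc0n1p0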
00:24:49.274 [2024-11-06 08:01:11.748683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.274 [2024-11-06 08:01:11.748736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:49.274 [2024-11-06 08:01:11.748755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.006 ms 00:24:49.274 [2024-11-06 08:01:11.748767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.274 [2024-11-06 08:01:11.749789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.274 [2024-11-06 08:01:11.749826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:49.274 [2024-11-06 08:01:11.749859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.870 ms 00:24:49.274 [2024-11-06 08:01:11.749876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.274 [2024-11-06 08:01:11.824876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.274 [2024-11-06 08:01:11.824958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:49.274 [2024-11-06 08:01:11.824997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.970 ms 00:24:49.274 [2024-11-06 08:01:11.825021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.274 [2024-11-06 08:01:11.839675] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:49.274 [2024-11-06 08:01:11.844078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.274 [2024-11-06 08:01:11.844115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:49.274 [2024-11-06 08:01:11.844151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.931 ms 00:24:49.274 [2024-11-06 08:01:11.844163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.274 [2024-11-06 08:01:11.844308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.274 [2024-11-06 08:01:11.844339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:49.274 [2024-11-06 08:01:11.844355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:49.274 [2024-11-06 08:01:11.844367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.274 [2024-11-06 08:01:11.844506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.274 [2024-11-06 08:01:11.844540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:49.274 [2024-11-06 08:01:11.844556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:24:49.274 [2024-11-06 08:01:11.844568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.274 [2024-11-06 08:01:11.844616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.274 [2024-11-06 08:01:11.844633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:49.274 [2024-11-06 08:01:11.844646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:49.274 [2024-11-06 08:01:11.844658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.274 [2024-11-06 08:01:11.844711] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:49.274 [2024-11-06 08:01:11.844733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.274 
[2024-11-06 08:01:11.844745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:49.274 [2024-11-06 08:01:11.844757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:24:49.274 [2024-11-06 08:01:11.844770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.274 [2024-11-06 08:01:11.877144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.274 [2024-11-06 08:01:11.877216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:49.274 [2024-11-06 08:01:11.877237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.342 ms 00:24:49.275 [2024-11-06 08:01:11.877269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.275 [2024-11-06 08:01:11.877389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.275 [2024-11-06 08:01:11.877410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:49.275 [2024-11-06 08:01:11.877423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:24:49.275 [2024-11-06 08:01:11.877435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.275 [2024-11-06 08:01:11.878844] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 362.691 ms, result 0 00:24:50.660  [2024-11-06T08:01:14.227Z] Copying: 23/1024 [MB] (23 MBps) [2024-11-06T08:01:15.191Z] Copying: 48/1024 [MB] (24 MBps) [2024-11-06T08:01:16.155Z] Copying: 73/1024 [MB] (24 MBps) [2024-11-06T08:01:17.090Z] Copying: 97/1024 [MB] (23 MBps) [2024-11-06T08:01:18.465Z] Copying: 120/1024 [MB] (23 MBps) [2024-11-06T08:01:19.399Z] Copying: 144/1024 [MB] (24 MBps) [2024-11-06T08:01:20.332Z] Copying: 168/1024 [MB] (24 MBps) [2024-11-06T08:01:21.266Z] Copying: 193/1024 [MB] (25 MBps) [2024-11-06T08:01:22.199Z] Copying: 219/1024 [MB] (25 MBps) [2024-11-06T08:01:23.147Z] Copying: 244/1024 [MB] (25 MBps) [2024-11-06T08:01:24.108Z] Copying: 269/1024 [MB] (24 MBps) [2024-11-06T08:01:25.483Z] Copying: 293/1024 [MB] (24 MBps) [2024-11-06T08:01:26.418Z] Copying: 318/1024 [MB] (24 MBps) [2024-11-06T08:01:27.352Z] Copying: 343/1024 [MB] (24 MBps) [2024-11-06T08:01:28.288Z] Copying: 368/1024 [MB] (25 MBps) [2024-11-06T08:01:29.224Z] Copying: 394/1024 [MB] (25 MBps) [2024-11-06T08:01:30.159Z] Copying: 419/1024 [MB] (25 MBps) [2024-11-06T08:01:31.096Z] Copying: 444/1024 [MB] (25 MBps) [2024-11-06T08:01:32.472Z] Copying: 469/1024 [MB] (24 MBps) [2024-11-06T08:01:33.407Z] Copying: 494/1024 [MB] (24 MBps) [2024-11-06T08:01:34.344Z] Copying: 519/1024 [MB] (24 MBps) [2024-11-06T08:01:35.277Z] Copying: 544/1024 [MB] (25 MBps) [2024-11-06T08:01:36.211Z] Copying: 570/1024 [MB] (25 MBps) [2024-11-06T08:01:37.144Z] Copying: 594/1024 [MB] (24 MBps) [2024-11-06T08:01:38.517Z] Copying: 620/1024 [MB] (25 MBps) [2024-11-06T08:01:39.083Z] Copying: 645/1024 [MB] (24 MBps) [2024-11-06T08:01:40.457Z] Copying: 669/1024 [MB] (24 MBps) [2024-11-06T08:01:41.390Z] Copying: 694/1024 [MB] (24 MBps) [2024-11-06T08:01:42.325Z] Copying: 718/1024 [MB] (24 MBps) [2024-11-06T08:01:43.260Z] Copying: 743/1024 [MB] (24 MBps) [2024-11-06T08:01:44.194Z] Copying: 766/1024 [MB] (23 MBps) [2024-11-06T08:01:45.143Z] Copying: 790/1024 [MB] (23 MBps) [2024-11-06T08:01:46.518Z] Copying: 814/1024 [MB] (23 MBps) [2024-11-06T08:01:47.083Z] Copying: 838/1024 [MB] (23 MBps) [2024-11-06T08:01:48.459Z] Copying: 862/1024 [MB] (23 MBps) [2024-11-06T08:01:49.396Z] 
Copying: 886/1024 [MB] (24 MBps) [2024-11-06T08:01:50.331Z] Copying: 909/1024 [MB] (23 MBps) [2024-11-06T08:01:51.267Z] Copying: 934/1024 [MB] (24 MBps) [2024-11-06T08:01:52.202Z] Copying: 958/1024 [MB] (23 MBps) [2024-11-06T08:01:53.138Z] Copying: 981/1024 [MB] (23 MBps) [2024-11-06T08:01:54.107Z] Copying: 1005/1024 [MB] (23 MBps) [2024-11-06T08:01:54.365Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-11-06 08:01:54.321962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.736 [2024-11-06 08:01:54.322060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:31.736 [2024-11-06 08:01:54.322086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:31.736 [2024-11-06 08:01:54.322099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.736 [2024-11-06 08:01:54.322152] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:31.736 [2024-11-06 08:01:54.326606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.736 [2024-11-06 08:01:54.326646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:31.736 [2024-11-06 08:01:54.326664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.420 ms 00:25:31.736 [2024-11-06 08:01:54.326684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.736 [2024-11-06 08:01:54.326995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.736 [2024-11-06 08:01:54.327031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:31.736 [2024-11-06 08:01:54.327047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:25:31.736 [2024-11-06 08:01:54.327061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.736 [2024-11-06 08:01:54.330531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.736 [2024-11-06 08:01:54.330562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:31.736 [2024-11-06 08:01:54.330577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.448 ms 00:25:31.736 [2024-11-06 08:01:54.330589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.736 [2024-11-06 08:01:54.337632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.736 [2024-11-06 08:01:54.337675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:31.736 [2024-11-06 08:01:54.337690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.008 ms 00:25:31.736 [2024-11-06 08:01:54.337702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.997 [2024-11-06 08:01:54.369137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.997 [2024-11-06 08:01:54.369220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:31.997 [2024-11-06 08:01:54.369241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.327 ms 00:25:31.997 [2024-11-06 08:01:54.369266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.997 [2024-11-06 08:01:54.386429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.997 [2024-11-06 08:01:54.386517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:31.997 [2024-11-06 08:01:54.386539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 17.093 ms 00:25:31.997 [2024-11-06 08:01:54.386552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.997 [2024-11-06 08:01:54.386741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.997 [2024-11-06 08:01:54.386773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:31.997 [2024-11-06 08:01:54.386787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:25:31.997 [2024-11-06 08:01:54.386800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.997 [2024-11-06 08:01:54.415777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.997 [2024-11-06 08:01:54.415857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:31.997 [2024-11-06 08:01:54.415889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.951 ms 00:25:31.997 [2024-11-06 08:01:54.415901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.997 [2024-11-06 08:01:54.444164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.997 [2024-11-06 08:01:54.444275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:31.997 [2024-11-06 08:01:54.444298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.197 ms 00:25:31.997 [2024-11-06 08:01:54.444311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.997 [2024-11-06 08:01:54.472123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.997 [2024-11-06 08:01:54.472212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:31.997 [2024-11-06 08:01:54.472240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.745 ms 00:25:31.997 [2024-11-06 08:01:54.472271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.997 [2024-11-06 08:01:54.500210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.997 [2024-11-06 08:01:54.500304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:31.997 [2024-11-06 08:01:54.500339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.792 ms 00:25:31.997 [2024-11-06 08:01:54.500351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.997 [2024-11-06 08:01:54.500405] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:31.997 [2024-11-06 08:01:54.500432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:31.997 [2024-11-06 08:01:54.500459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:31.997 [2024-11-06 08:01:54.500473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:31.997 [2024-11-06 08:01:54.500486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:31.997 [2024-11-06 08:01:54.500498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:31.997 [2024-11-06 08:01:54.500510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:31.997 [2024-11-06 08:01:54.500522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:31.997 [2024-11-06 08:01:54.500534] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:31.997 [2024-11-06 08:01:54.500546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:31.997 [2024-11-06 08:01:54.500558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:31.997 [2024-11-06 08:01:54.500571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:31.997 [2024-11-06 08:01:54.500583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:31.997 [2024-11-06 08:01:54.500596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:31.997 [2024-11-06 08:01:54.500608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:31.997 [2024-11-06 08:01:54.500621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:31.997 [2024-11-06 08:01:54.500633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:31.997 [2024-11-06 08:01:54.500645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:31.997 [2024-11-06 08:01:54.500657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:31.997 [2024-11-06 08:01:54.500670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:31.997 [2024-11-06 08:01:54.500684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:31.997 [2024-11-06 08:01:54.500696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:31.997 [2024-11-06 08:01:54.500708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:31.997 [2024-11-06 08:01:54.500720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:31.997 [2024-11-06 08:01:54.500732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:31.997 [2024-11-06 08:01:54.500744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:31.997 [2024-11-06 08:01:54.500759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:31.997 [2024-11-06 08:01:54.500771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:31.997 [2024-11-06 08:01:54.500784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:31.997 [2024-11-06 08:01:54.500795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:31.997 [2024-11-06 08:01:54.500808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:31.997 [2024-11-06 08:01:54.500820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:31.997 [2024-11-06 08:01:54.500833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:31.997 
[2024-11-06 08:01:54.500846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:31.997 [2024-11-06 08:01:54.500858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:31.997 [2024-11-06 08:01:54.500871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:31.997 [2024-11-06 08:01:54.500884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:31.997 [2024-11-06 08:01:54.500897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:31.997 [2024-11-06 08:01:54.500909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:31.997 [2024-11-06 08:01:54.500922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:31.997 [2024-11-06 08:01:54.500933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:31.997 [2024-11-06 08:01:54.500946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:31.997 [2024-11-06 08:01:54.500957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:31.997 [2024-11-06 08:01:54.500968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:31.997 [2024-11-06 08:01:54.500980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:31.997 [2024-11-06 08:01:54.500991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:31.997 [2024-11-06 08:01:54.501003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:31.997 [2024-11-06 08:01:54.501014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:31.997 [2024-11-06 08:01:54.501026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:31.997 [2024-11-06 08:01:54.501038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:31.997 [2024-11-06 08:01:54.501060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:31.998 [2024-11-06 08:01:54.501073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:31.998 [2024-11-06 08:01:54.501085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:31.998 [2024-11-06 08:01:54.501098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:31.998 [2024-11-06 08:01:54.501109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:31.998 [2024-11-06 08:01:54.501120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:31.998 [2024-11-06 08:01:54.501131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:31.998 [2024-11-06 08:01:54.501143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:25:31.998 [2024-11-06 08:01:54.501156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:31.998 [2024-11-06 08:01:54.501169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:31.998 [2024-11-06 08:01:54.501181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:31.998 [2024-11-06 08:01:54.501193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:31.998 [2024-11-06 08:01:54.501205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:31.998 [2024-11-06 08:01:54.501218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:31.998 [2024-11-06 08:01:54.501230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:31.998 [2024-11-06 08:01:54.501242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:31.998 [2024-11-06 08:01:54.501268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:31.998 [2024-11-06 08:01:54.501281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:31.998 [2024-11-06 08:01:54.501293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:31.998 [2024-11-06 08:01:54.501305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:31.998 [2024-11-06 08:01:54.501318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:31.998 [2024-11-06 08:01:54.501330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:31.998 [2024-11-06 08:01:54.501342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:31.998 [2024-11-06 08:01:54.501354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:31.998 [2024-11-06 08:01:54.501366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:31.998 [2024-11-06 08:01:54.501378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:31.998 [2024-11-06 08:01:54.501390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:31.998 [2024-11-06 08:01:54.501402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:31.998 [2024-11-06 08:01:54.501414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:31.998 [2024-11-06 08:01:54.501427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:31.998 [2024-11-06 08:01:54.501438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:31.998 [2024-11-06 08:01:54.501449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:31.998 [2024-11-06 08:01:54.501461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:25:31.998 [2024-11-06 08:01:54.501474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:31.998 [2024-11-06 08:01:54.501486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:31.998 [2024-11-06 08:01:54.501498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:31.998 [2024-11-06 08:01:54.501509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:31.998 [2024-11-06 08:01:54.501522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:31.998 [2024-11-06 08:01:54.501534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:31.998 [2024-11-06 08:01:54.501547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:31.998 [2024-11-06 08:01:54.501560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:31.998 [2024-11-06 08:01:54.501573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:31.998 [2024-11-06 08:01:54.501585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:31.998 [2024-11-06 08:01:54.501597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:31.998 [2024-11-06 08:01:54.501609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:31.998 [2024-11-06 08:01:54.501621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:31.998 [2024-11-06 08:01:54.501633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:31.998 [2024-11-06 08:01:54.501645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:31.998 [2024-11-06 08:01:54.501657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:31.998 [2024-11-06 08:01:54.501669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:31.998 [2024-11-06 08:01:54.501681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:31.998 [2024-11-06 08:01:54.501703] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:31.998 [2024-11-06 08:01:54.501715] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b72774d1-8924-47f6-808c-25def4de7f7d 00:25:31.998 [2024-11-06 08:01:54.501734] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:31.998 [2024-11-06 08:01:54.501744] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:31.998 [2024-11-06 08:01:54.501755] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:31.998 [2024-11-06 08:01:54.501767] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:31.998 [2024-11-06 08:01:54.501778] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:31.998 [2024-11-06 08:01:54.501790] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:25:31.998 [2024-11-06 08:01:54.501817] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:31.998 [2024-11-06 08:01:54.501829] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:31.998 [2024-11-06 08:01:54.501840] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:31.998 [2024-11-06 08:01:54.501852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.998 [2024-11-06 08:01:54.501864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:31.998 [2024-11-06 08:01:54.501876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.448 ms 00:25:31.998 [2024-11-06 08:01:54.501888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.998 [2024-11-06 08:01:54.518415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.998 [2024-11-06 08:01:54.518490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:31.998 [2024-11-06 08:01:54.518518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.471 ms 00:25:31.998 [2024-11-06 08:01:54.518531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.998 [2024-11-06 08:01:54.519038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.998 [2024-11-06 08:01:54.519065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:31.998 [2024-11-06 08:01:54.519080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.458 ms 00:25:31.998 [2024-11-06 08:01:54.519106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.998 [2024-11-06 08:01:54.562206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.998 [2024-11-06 08:01:54.562316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:31.998 [2024-11-06 08:01:54.562341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.998 [2024-11-06 08:01:54.562354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.998 [2024-11-06 08:01:54.562466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.998 [2024-11-06 08:01:54.562483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:31.998 [2024-11-06 08:01:54.562495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.998 [2024-11-06 08:01:54.562515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.998 [2024-11-06 08:01:54.562655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.998 [2024-11-06 08:01:54.562687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:31.998 [2024-11-06 08:01:54.562700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.998 [2024-11-06 08:01:54.562712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.998 [2024-11-06 08:01:54.562736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:31.998 [2024-11-06 08:01:54.562750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:31.998 [2024-11-06 08:01:54.562763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:31.998 [2024-11-06 08:01:54.562775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.258 [2024-11-06 08:01:54.674210] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.258 [2024-11-06 08:01:54.674324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:32.258 [2024-11-06 08:01:54.674352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.258 [2024-11-06 08:01:54.674365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.258 [2024-11-06 08:01:54.758719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.258 [2024-11-06 08:01:54.758828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:32.258 [2024-11-06 08:01:54.758859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.258 [2024-11-06 08:01:54.758872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.258 [2024-11-06 08:01:54.759030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.258 [2024-11-06 08:01:54.759050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:32.258 [2024-11-06 08:01:54.759063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.258 [2024-11-06 08:01:54.759075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.258 [2024-11-06 08:01:54.759125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.258 [2024-11-06 08:01:54.759141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:32.258 [2024-11-06 08:01:54.759153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.258 [2024-11-06 08:01:54.759165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.258 [2024-11-06 08:01:54.759333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.258 [2024-11-06 08:01:54.759362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:32.258 [2024-11-06 08:01:54.759375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.258 [2024-11-06 08:01:54.759387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.258 [2024-11-06 08:01:54.759439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.258 [2024-11-06 08:01:54.759458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:32.258 [2024-11-06 08:01:54.759471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.258 [2024-11-06 08:01:54.759483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.258 [2024-11-06 08:01:54.759534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.258 [2024-11-06 08:01:54.759564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:32.258 [2024-11-06 08:01:54.759577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.258 [2024-11-06 08:01:54.759588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.258 [2024-11-06 08:01:54.759649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:32.258 [2024-11-06 08:01:54.759666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:32.258 [2024-11-06 08:01:54.759678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:32.258 [2024-11-06 08:01:54.759691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:25:32.258 [2024-11-06 08:01:54.759871] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 437.874 ms, result 0 00:25:33.194 00:25:33.194 00:25:33.452 08:01:55 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:35.375 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:35.375 08:01:57 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:25:35.634 [2024-11-06 08:01:58.028726] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:25:35.634 [2024-11-06 08:01:58.028935] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77976 ] 00:25:35.634 [2024-11-06 08:01:58.222038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.893 [2024-11-06 08:01:58.390242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:36.151 [2024-11-06 08:01:58.775764] bdev.c:8607:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:36.151 [2024-11-06 08:01:58.775882] bdev.c:8607:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:36.411 [2024-11-06 08:01:58.941270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.411 [2024-11-06 08:01:58.941362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:36.411 [2024-11-06 08:01:58.941390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:36.411 [2024-11-06 08:01:58.941403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.411 [2024-11-06 08:01:58.941480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.411 [2024-11-06 08:01:58.941510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:36.411 [2024-11-06 08:01:58.941529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:25:36.411 [2024-11-06 08:01:58.941540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.411 [2024-11-06 08:01:58.941571] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:36.411 [2024-11-06 08:01:58.942474] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:36.411 [2024-11-06 08:01:58.942523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.411 [2024-11-06 08:01:58.942537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:36.411 [2024-11-06 08:01:58.942550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.951 ms 00:25:36.411 [2024-11-06 08:01:58.942562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.411 [2024-11-06 08:01:58.945135] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:36.411 [2024-11-06 08:01:58.961886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.411 [2024-11-06 08:01:58.961964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:36.411 
[2024-11-06 08:01:58.961993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.751 ms 00:25:36.411 [2024-11-06 08:01:58.962005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.411 [2024-11-06 08:01:58.962113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.411 [2024-11-06 08:01:58.962139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:36.411 [2024-11-06 08:01:58.962153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:25:36.411 [2024-11-06 08:01:58.962164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.411 [2024-11-06 08:01:58.974727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.411 [2024-11-06 08:01:58.974811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:36.411 [2024-11-06 08:01:58.974838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.419 ms 00:25:36.411 [2024-11-06 08:01:58.974851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.411 [2024-11-06 08:01:58.975005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.411 [2024-11-06 08:01:58.975030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:36.411 [2024-11-06 08:01:58.975043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:25:36.411 [2024-11-06 08:01:58.975055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.411 [2024-11-06 08:01:58.975198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.411 [2024-11-06 08:01:58.975221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:36.411 [2024-11-06 08:01:58.975236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:25:36.411 [2024-11-06 08:01:58.975276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.411 [2024-11-06 08:01:58.975321] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:36.411 [2024-11-06 08:01:58.980732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.411 [2024-11-06 08:01:58.980772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:36.411 [2024-11-06 08:01:58.980787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.425 ms 00:25:36.411 [2024-11-06 08:01:58.980805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.411 [2024-11-06 08:01:58.980857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.411 [2024-11-06 08:01:58.980874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:36.411 [2024-11-06 08:01:58.980888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:36.411 [2024-11-06 08:01:58.980901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.411 [2024-11-06 08:01:58.980948] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:36.411 [2024-11-06 08:01:58.980984] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:36.411 [2024-11-06 08:01:58.981028] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:36.411 [2024-11-06 08:01:58.981090] 
upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:36.411 [2024-11-06 08:01:58.981222] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:36.411 [2024-11-06 08:01:58.981241] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:36.411 [2024-11-06 08:01:58.981274] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:36.411 [2024-11-06 08:01:58.981292] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:36.411 [2024-11-06 08:01:58.981306] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:36.411 [2024-11-06 08:01:58.981319] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:36.411 [2024-11-06 08:01:58.981331] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:36.411 [2024-11-06 08:01:58.981342] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:36.411 [2024-11-06 08:01:58.981354] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:36.411 [2024-11-06 08:01:58.981373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.411 [2024-11-06 08:01:58.981386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:36.411 [2024-11-06 08:01:58.981415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.429 ms 00:25:36.411 [2024-11-06 08:01:58.981426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.411 [2024-11-06 08:01:58.981520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.411 [2024-11-06 08:01:58.981537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:36.411 [2024-11-06 08:01:58.981550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:25:36.411 [2024-11-06 08:01:58.981561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.411 [2024-11-06 08:01:58.981681] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:36.411 [2024-11-06 08:01:58.981719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:36.411 [2024-11-06 08:01:58.981733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:36.411 [2024-11-06 08:01:58.981746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.411 [2024-11-06 08:01:58.981758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:36.411 [2024-11-06 08:01:58.981768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:36.411 [2024-11-06 08:01:58.981779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:36.411 [2024-11-06 08:01:58.981790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:36.411 [2024-11-06 08:01:58.981801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:36.411 [2024-11-06 08:01:58.981813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:36.411 [2024-11-06 08:01:58.981823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:36.411 [2024-11-06 08:01:58.981834] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 80.62 MiB 00:25:36.411 [2024-11-06 08:01:58.981844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:36.411 [2024-11-06 08:01:58.981854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:36.411 [2024-11-06 08:01:58.981865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:36.411 [2024-11-06 08:01:58.981891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.411 [2024-11-06 08:01:58.981904] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:36.411 [2024-11-06 08:01:58.981917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:36.411 [2024-11-06 08:01:58.981927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.411 [2024-11-06 08:01:58.981938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:36.411 [2024-11-06 08:01:58.981949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:36.411 [2024-11-06 08:01:58.981959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:36.411 [2024-11-06 08:01:58.981970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:36.411 [2024-11-06 08:01:58.981980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:36.411 [2024-11-06 08:01:58.981990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:36.411 [2024-11-06 08:01:58.982000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:36.411 [2024-11-06 08:01:58.982010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:36.411 [2024-11-06 08:01:58.982020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:36.411 [2024-11-06 08:01:58.982031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:36.412 [2024-11-06 08:01:58.982042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:36.412 [2024-11-06 08:01:58.982051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:36.412 [2024-11-06 08:01:58.982062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:36.412 [2024-11-06 08:01:58.982072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:36.412 [2024-11-06 08:01:58.982082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:36.412 [2024-11-06 08:01:58.982092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:36.412 [2024-11-06 08:01:58.982102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:36.412 [2024-11-06 08:01:58.982112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:36.412 [2024-11-06 08:01:58.982123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:36.412 [2024-11-06 08:01:58.982133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:36.412 [2024-11-06 08:01:58.982143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.412 [2024-11-06 08:01:58.982153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:36.412 [2024-11-06 08:01:58.982163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:36.412 [2024-11-06 08:01:58.982173] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.412 [2024-11-06 08:01:58.982183] 
ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:36.412 [2024-11-06 08:01:58.982195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:36.412 [2024-11-06 08:01:58.982206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:36.412 [2024-11-06 08:01:58.982217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.412 [2024-11-06 08:01:58.982229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:36.412 [2024-11-06 08:01:58.982241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:36.412 [2024-11-06 08:01:58.982271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:36.412 [2024-11-06 08:01:58.982284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:36.412 [2024-11-06 08:01:58.982295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:36.412 [2024-11-06 08:01:58.982307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:36.412 [2024-11-06 08:01:58.982320] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:36.412 [2024-11-06 08:01:58.982334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:36.412 [2024-11-06 08:01:58.982347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:36.412 [2024-11-06 08:01:58.982360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:36.412 [2024-11-06 08:01:58.982371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:36.412 [2024-11-06 08:01:58.982383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:36.412 [2024-11-06 08:01:58.982394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:36.412 [2024-11-06 08:01:58.982404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:36.412 [2024-11-06 08:01:58.982415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:36.412 [2024-11-06 08:01:58.982426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:36.412 [2024-11-06 08:01:58.982437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:36.412 [2024-11-06 08:01:58.982447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:36.412 [2024-11-06 08:01:58.982458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:36.412 [2024-11-06 08:01:58.982469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:36.412 [2024-11-06 08:01:58.982479] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:36.412 [2024-11-06 08:01:58.982493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:36.412 [2024-11-06 08:01:58.982504] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:36.412 [2024-11-06 08:01:58.982518] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:36.412 [2024-11-06 08:01:58.982537] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:36.412 [2024-11-06 08:01:58.982549] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:36.412 [2024-11-06 08:01:58.982561] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:36.412 [2024-11-06 08:01:58.982572] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:36.412 [2024-11-06 08:01:58.982584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.412 [2024-11-06 08:01:58.982596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:36.412 [2024-11-06 08:01:58.982608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.965 ms 00:25:36.412 [2024-11-06 08:01:58.982620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.412 [2024-11-06 08:01:59.026922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.412 [2024-11-06 08:01:59.027009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:36.412 [2024-11-06 08:01:59.027035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.229 ms 00:25:36.412 [2024-11-06 08:01:59.027048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.412 [2024-11-06 08:01:59.027200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.412 [2024-11-06 08:01:59.027225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:36.412 [2024-11-06 08:01:59.027239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:25:36.412 [2024-11-06 08:01:59.027266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.671 [2024-11-06 08:01:59.086648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.671 [2024-11-06 08:01:59.086737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:36.671 [2024-11-06 08:01:59.086766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.239 ms 00:25:36.671 [2024-11-06 08:01:59.086778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.671 [2024-11-06 08:01:59.086883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.671 [2024-11-06 08:01:59.086901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:36.671 [2024-11-06 08:01:59.086917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:36.671 [2024-11-06 08:01:59.086936] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.671 [2024-11-06 08:01:59.087836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.671 [2024-11-06 08:01:59.087867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:36.671 [2024-11-06 08:01:59.087891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.763 ms 00:25:36.671 [2024-11-06 08:01:59.087904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.671 [2024-11-06 08:01:59.088097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.671 [2024-11-06 08:01:59.088118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:36.671 [2024-11-06 08:01:59.088132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:25:36.671 [2024-11-06 08:01:59.088143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.671 [2024-11-06 08:01:59.108974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.671 [2024-11-06 08:01:59.109060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:36.671 [2024-11-06 08:01:59.109086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.790 ms 00:25:36.671 [2024-11-06 08:01:59.109105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.671 [2024-11-06 08:01:59.126064] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:36.671 [2024-11-06 08:01:59.126143] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:36.671 [2024-11-06 08:01:59.126172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.671 [2024-11-06 08:01:59.126186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:36.671 [2024-11-06 08:01:59.126203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.833 ms 00:25:36.671 [2024-11-06 08:01:59.126215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.671 [2024-11-06 08:01:59.155615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.671 [2024-11-06 08:01:59.155765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:36.671 [2024-11-06 08:01:59.155789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.307 ms 00:25:36.671 [2024-11-06 08:01:59.155816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.671 [2024-11-06 08:01:59.172685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.671 [2024-11-06 08:01:59.172782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:36.671 [2024-11-06 08:01:59.172811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.754 ms 00:25:36.671 [2024-11-06 08:01:59.172825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.671 [2024-11-06 08:01:59.188075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.671 [2024-11-06 08:01:59.188160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:36.671 [2024-11-06 08:01:59.188184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.165 ms 00:25:36.671 [2024-11-06 08:01:59.188196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.671 
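Each management action above is emitted as a fixed four-line trace_step group (Action / name / duration / status) from mngt/ftl_mngt.c lines 427-431. A minimal sketch that totals per-step durations from a saved console log, assuming one log entry per line as in the raw Jenkins output; the file name console.log is hypothetical:

    import re
    from collections import defaultdict

    NAME_RE = re.compile(r"428:trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.*)")
    DUR_RE  = re.compile(r"430:trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([0-9.]+) ms")

    def step_durations(lines):
        # Pair each "name:" entry with the "duration:" entry that follows it.
        name = None
        for line in lines:
            m = NAME_RE.search(line)
            if m:
                name = m.group(1).strip()
                continue
            m = DUR_RE.search(line)
            if m and name is not None:
                yield name, float(m.group(1))
                name = None

    totals = defaultdict(float)
    with open("console.log") as f:  # hypothetical: saved console output
        for name, ms in step_durations(f):
            totals[name] += ms
    for name, ms in sorted(totals.items(), key=lambda kv: -kv[1]):
        print(f"{ms:10.3f} ms  {name}")

Run over this section it would surface the slow steps, e.g. Restore P2L checkpoints at 103.952 ms and Initialize NV cache at 59.239 ms, against the sub-millisecond ones.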
[2024-11-06 08:01:59.189305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.671 [2024-11-06 08:01:59.189341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:36.671 [2024-11-06 08:01:59.189358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.884 ms 00:25:36.671 [2024-11-06 08:01:59.189376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.671 [2024-11-06 08:01:59.293377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.671 [2024-11-06 08:01:59.293495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:36.671 [2024-11-06 08:01:59.293523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 103.952 ms 00:25:36.671 [2024-11-06 08:01:59.293556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.930 [2024-11-06 08:01:59.312342] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:36.930 [2024-11-06 08:01:59.318119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.930 [2024-11-06 08:01:59.318183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:36.930 [2024-11-06 08:01:59.318216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.404 ms 00:25:36.930 [2024-11-06 08:01:59.318231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.930 [2024-11-06 08:01:59.318471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.930 [2024-11-06 08:01:59.318500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:36.930 [2024-11-06 08:01:59.318520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:36.930 [2024-11-06 08:01:59.318536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.930 [2024-11-06 08:01:59.318704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.930 [2024-11-06 08:01:59.318740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:36.930 [2024-11-06 08:01:59.318758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:25:36.930 [2024-11-06 08:01:59.318774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.930 [2024-11-06 08:01:59.318820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.930 [2024-11-06 08:01:59.318840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:36.930 [2024-11-06 08:01:59.318857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:36.930 [2024-11-06 08:01:59.318872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.930 [2024-11-06 08:01:59.318934] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:36.930 [2024-11-06 08:01:59.318962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.930 [2024-11-06 08:01:59.318978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:36.930 [2024-11-06 08:01:59.318993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:25:36.930 [2024-11-06 08:01:59.319008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.930 [2024-11-06 08:01:59.360987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.930 [2024-11-06 
08:01:59.361103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:36.930 [2024-11-06 08:01:59.361126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.934 ms 00:25:36.930 [2024-11-06 08:01:59.361139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.930 [2024-11-06 08:01:59.361312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.930 [2024-11-06 08:01:59.361334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:36.930 [2024-11-06 08:01:59.361349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:25:36.930 [2024-11-06 08:01:59.361361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.930 [2024-11-06 08:01:59.363426] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 421.400 ms, result 0 00:25:37.934  [2024-11-06T08:02:01.496Z] Copying: 22/1024 [MB] (22 MBps) [2024-11-06T08:02:02.430Z] Copying: 45/1024 [MB] (23 MBps) [2024-11-06T08:02:03.805Z] Copying: 67/1024 [MB] (22 MBps) [2024-11-06T08:02:04.739Z] Copying: 90/1024 [MB] (22 MBps) [2024-11-06T08:02:05.676Z] Copying: 113/1024 [MB] (23 MBps) [2024-11-06T08:02:06.633Z] Copying: 135/1024 [MB] (21 MBps) [2024-11-06T08:02:07.568Z] Copying: 157/1024 [MB] (22 MBps) [2024-11-06T08:02:08.504Z] Copying: 179/1024 [MB] (22 MBps) [2024-11-06T08:02:09.440Z] Copying: 201/1024 [MB] (22 MBps) [2024-11-06T08:02:10.376Z] Copying: 224/1024 [MB] (22 MBps) [2024-11-06T08:02:11.752Z] Copying: 246/1024 [MB] (22 MBps) [2024-11-06T08:02:12.687Z] Copying: 268/1024 [MB] (22 MBps) [2024-11-06T08:02:13.663Z] Copying: 290/1024 [MB] (21 MBps) [2024-11-06T08:02:14.598Z] Copying: 312/1024 [MB] (22 MBps) [2024-11-06T08:02:15.534Z] Copying: 335/1024 [MB] (22 MBps) [2024-11-06T08:02:16.470Z] Copying: 357/1024 [MB] (21 MBps) [2024-11-06T08:02:17.405Z] Copying: 378/1024 [MB] (21 MBps) [2024-11-06T08:02:18.779Z] Copying: 400/1024 [MB] (21 MBps) [2024-11-06T08:02:19.713Z] Copying: 422/1024 [MB] (21 MBps) [2024-11-06T08:02:20.663Z] Copying: 443/1024 [MB] (21 MBps) [2024-11-06T08:02:21.613Z] Copying: 465/1024 [MB] (21 MBps) [2024-11-06T08:02:22.547Z] Copying: 486/1024 [MB] (21 MBps) [2024-11-06T08:02:23.482Z] Copying: 508/1024 [MB] (21 MBps) [2024-11-06T08:02:24.416Z] Copying: 529/1024 [MB] (21 MBps) [2024-11-06T08:02:25.790Z] Copying: 551/1024 [MB] (21 MBps) [2024-11-06T08:02:26.724Z] Copying: 573/1024 [MB] (22 MBps) [2024-11-06T08:02:27.659Z] Copying: 594/1024 [MB] (21 MBps) [2024-11-06T08:02:28.593Z] Copying: 616/1024 [MB] (21 MBps) [2024-11-06T08:02:29.529Z] Copying: 638/1024 [MB] (22 MBps) [2024-11-06T08:02:30.462Z] Copying: 660/1024 [MB] (22 MBps) [2024-11-06T08:02:31.396Z] Copying: 681/1024 [MB] (21 MBps) [2024-11-06T08:02:32.773Z] Copying: 703/1024 [MB] (21 MBps) [2024-11-06T08:02:33.709Z] Copying: 725/1024 [MB] (21 MBps) [2024-11-06T08:02:34.645Z] Copying: 746/1024 [MB] (21 MBps) [2024-11-06T08:02:35.580Z] Copying: 769/1024 [MB] (22 MBps) [2024-11-06T08:02:36.515Z] Copying: 791/1024 [MB] (22 MBps) [2024-11-06T08:02:37.470Z] Copying: 814/1024 [MB] (22 MBps) [2024-11-06T08:02:38.406Z] Copying: 836/1024 [MB] (22 MBps) [2024-11-06T08:02:39.783Z] Copying: 857/1024 [MB] (21 MBps) [2024-11-06T08:02:40.720Z] Copying: 880/1024 [MB] (22 MBps) [2024-11-06T08:02:41.657Z] Copying: 902/1024 [MB] (22 MBps) [2024-11-06T08:02:42.594Z] Copying: 923/1024 [MB] (21 MBps) [2024-11-06T08:02:43.531Z] Copying: 945/1024 [MB] (21 MBps) 
[2024-11-06T08:02:44.467Z] Copying: 967/1024 [MB] (21 MBps) [2024-11-06T08:02:45.403Z] Copying: 989/1024 [MB] (22 MBps) [2024-11-06T08:02:46.781Z] Copying: 1011/1024 [MB] (21 MBps) [2024-11-06T08:02:47.040Z] Copying: 1023/1024 [MB] (12 MBps) [2024-11-06T08:02:47.040Z] Copying: 1024/1024 [MB] (average 21 MBps)[2024-11-06 08:02:46.982572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.411 [2024-11-06 08:02:46.982666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:24.411 [2024-11-06 08:02:46.982689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:24.411 [2024-11-06 08:02:46.982717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.411 [2024-11-06 08:02:46.984618] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:24.411 [2024-11-06 08:02:46.991005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.411 [2024-11-06 08:02:46.991061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:24.411 [2024-11-06 08:02:46.991088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.306 ms 00:26:24.411 [2024-11-06 08:02:46.991100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.411 [2024-11-06 08:02:47.002190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.411 [2024-11-06 08:02:47.002272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:24.411 [2024-11-06 08:02:47.002292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.621 ms 00:26:24.411 [2024-11-06 08:02:47.002304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.411 [2024-11-06 08:02:47.023133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.411 [2024-11-06 08:02:47.023221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:24.411 [2024-11-06 08:02:47.023245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.790 ms 00:26:24.411 [2024-11-06 08:02:47.023271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.411 [2024-11-06 08:02:47.029148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.411 [2024-11-06 08:02:47.029184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:24.411 [2024-11-06 08:02:47.029199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.831 ms 00:26:24.411 [2024-11-06 08:02:47.029211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.670 [2024-11-06 08:02:47.060193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.670 [2024-11-06 08:02:47.060293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:24.670 [2024-11-06 08:02:47.060325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.861 ms 00:26:24.670 [2024-11-06 08:02:47.060338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.670 [2024-11-06 08:02:47.078579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.670 [2024-11-06 08:02:47.078684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:24.670 [2024-11-06 08:02:47.078715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.149 ms 00:26:24.670 [2024-11-06 08:02:47.078728] mngt/ftl_mngt.c: 
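The ticker's closing "average 21 MBps" can be sanity-checked against the wall clock: 1024 MB moved between the "FTL startup" finished notice and the final tick. A minimal sketch, with both timestamps copied from the log above and assumed to share one clock:

    from datetime import datetime

    t0 = datetime.fromisoformat("2024-11-06T08:01:59.363426")  # startup done
    t1 = datetime.fromisoformat("2024-11-06T08:02:47.040000")  # final tick
    elapsed = (t1 - t0).total_seconds()   # ~47.7 s
    print(1024 / elapsed)                 # ~21.5 MB/s, reported as "21 MBps"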
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.670 [2024-11-06 08:02:47.188148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.670 [2024-11-06 08:02:47.188272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:24.670 [2024-11-06 08:02:47.188299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 109.334 ms 00:26:24.670 [2024-11-06 08:02:47.188312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.670 [2024-11-06 08:02:47.219824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.670 [2024-11-06 08:02:47.219916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:24.670 [2024-11-06 08:02:47.219944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.483 ms 00:26:24.671 [2024-11-06 08:02:47.219957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.671 [2024-11-06 08:02:47.248649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.671 [2024-11-06 08:02:47.248749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:24.671 [2024-11-06 08:02:47.248776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.610 ms 00:26:24.671 [2024-11-06 08:02:47.248788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.671 [2024-11-06 08:02:47.276902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.671 [2024-11-06 08:02:47.276988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:24.671 [2024-11-06 08:02:47.277018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.043 ms 00:26:24.671 [2024-11-06 08:02:47.277030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.930 [2024-11-06 08:02:47.305237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.930 [2024-11-06 08:02:47.305325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:24.930 [2024-11-06 08:02:47.305347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.042 ms 00:26:24.930 [2024-11-06 08:02:47.305359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.930 [2024-11-06 08:02:47.305430] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:24.930 [2024-11-06 08:02:47.305460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 115712 / 261120 wr_cnt: 1 state: open 00:26:24.930 [2024-11-06 08:02:47.305476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:24.930 [2024-11-06 08:02:47.305489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:24.930 [2024-11-06 08:02:47.305509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:24.930 [2024-11-06 08:02:47.305523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:24.930 [2024-11-06 08:02:47.305535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:24.930 [2024-11-06 08:02:47.305547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:24.930 [2024-11-06 08:02:47.305559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 
wr_cnt: 0 state: free 00:26:24.930 [2024-11-06 08:02:47.305570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:24.930 [2024-11-06 08:02:47.305582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:24.930 [2024-11-06 08:02:47.305594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:24.930 [2024-11-06 08:02:47.305605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:24.930 [2024-11-06 08:02:47.305617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:24.930 [2024-11-06 08:02:47.305629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:24.930 [2024-11-06 08:02:47.305641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:24.930 [2024-11-06 08:02:47.305663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:24.930 [2024-11-06 08:02:47.305687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:24.930 [2024-11-06 08:02:47.305699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:24.930 [2024-11-06 08:02:47.305711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:24.930 [2024-11-06 08:02:47.305723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:24.930 [2024-11-06 08:02:47.305735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:24.930 [2024-11-06 08:02:47.305748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:24.930 [2024-11-06 08:02:47.305760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:24.930 [2024-11-06 08:02:47.305771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:24.930 [2024-11-06 08:02:47.305783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:24.930 [2024-11-06 08:02:47.305794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:24.930 [2024-11-06 08:02:47.305806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:24.930 [2024-11-06 08:02:47.305817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:24.930 [2024-11-06 08:02:47.305830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:24.930 [2024-11-06 08:02:47.305842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:24.930 [2024-11-06 08:02:47.305854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:24.930 [2024-11-06 08:02:47.305866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:24.930 [2024-11-06 08:02:47.305879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:24.930 [2024-11-06 08:02:47.305891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:24.930 [2024-11-06 08:02:47.305903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:24.930 [2024-11-06 08:02:47.305914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:24.930 [2024-11-06 08:02:47.305926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:24.930 [2024-11-06 08:02:47.305938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:24.930 [2024-11-06 08:02:47.305949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:24.930 [2024-11-06 08:02:47.305961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:24.930 [2024-11-06 08:02:47.305973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:24.930 [2024-11-06 08:02:47.305985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:24.930 [2024-11-06 08:02:47.305997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:24.930 [2024-11-06 08:02:47.306009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:24.930 [2024-11-06 08:02:47.306021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:24.930 [2024-11-06 08:02:47.306033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:24.930 [2024-11-06 08:02:47.306046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306178] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306510] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:24.931 [2024-11-06 08:02:47.306751] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:24.931 [2024-11-06 08:02:47.306766] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b72774d1-8924-47f6-808c-25def4de7f7d 00:26:24.931 [2024-11-06 08:02:47.306779] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 115712 00:26:24.931 [2024-11-06 08:02:47.306791] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 116672 00:26:24.931 [2024-11-06 08:02:47.306802] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 115712 00:26:24.931 [2024-11-06 08:02:47.306814] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0083 00:26:24.931 [2024-11-06 08:02:47.306826] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:24.931 [2024-11-06 08:02:47.306838] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:24.931 [2024-11-06 08:02:47.306871] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] high: 0 00:26:24.931 [2024-11-06 08:02:47.306882] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:24.931 [2024-11-06 08:02:47.306892] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:24.931 [2024-11-06 08:02:47.306903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.931 [2024-11-06 08:02:47.306916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:24.931 [2024-11-06 08:02:47.306929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.475 ms 00:26:24.931 [2024-11-06 08:02:47.306940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.931 [2024-11-06 08:02:47.323584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.931 [2024-11-06 08:02:47.323665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:24.931 [2024-11-06 08:02:47.323697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.586 ms 00:26:24.931 [2024-11-06 08:02:47.323724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.931 [2024-11-06 08:02:47.324273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.931 [2024-11-06 08:02:47.324300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:24.931 [2024-11-06 08:02:47.324314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.491 ms 00:26:24.931 [2024-11-06 08:02:47.324327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.931 [2024-11-06 08:02:47.367454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:24.931 [2024-11-06 08:02:47.367557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:24.931 [2024-11-06 08:02:47.367588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:24.931 [2024-11-06 08:02:47.367600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.931 [2024-11-06 08:02:47.367707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:24.931 [2024-11-06 08:02:47.367725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:24.931 [2024-11-06 08:02:47.367738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:24.931 [2024-11-06 08:02:47.367749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.931 [2024-11-06 08:02:47.367862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:24.931 [2024-11-06 08:02:47.367882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:24.931 [2024-11-06 08:02:47.367908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:24.931 [2024-11-06 08:02:47.367926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.931 [2024-11-06 08:02:47.367961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:24.931 [2024-11-06 08:02:47.367975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:24.931 [2024-11-06 08:02:47.367988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:24.931 [2024-11-06 08:02:47.367999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.931 [2024-11-06 08:02:47.479900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:24.931 [2024-11-06 
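The statistics dump above makes the WAF figure directly checkable: write amplification is total device writes over user writes, with both counters printed verbatim. A minimal sketch using the numbers from the dump:

    total_writes = 116672   # "total writes" from the stats dump
    user_writes  = 115712   # "user writes"
    print(total_writes / user_writes)   # 1.00829... -> reported WAF 1.0083
    print(total_writes - user_writes)   # 960 blocks of FTL-internal writes

    # Band 1 from the bands-validity dump: 115712 valid of 261120 blocks
    print(115712 / 261120)              # ~0.443, i.e. the band is ~44% valid

Note the "total valid LBAs" counter equals the user-write count here, consistent with a single sequential write pass into Band 1.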
08:02:47.479990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:24.931 [2024-11-06 08:02:47.480031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:24.931 [2024-11-06 08:02:47.480043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.189 [2024-11-06 08:02:47.563764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.189 [2024-11-06 08:02:47.563864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:25.189 [2024-11-06 08:02:47.563893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.189 [2024-11-06 08:02:47.563906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.189 [2024-11-06 08:02:47.564058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.189 [2024-11-06 08:02:47.564079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:25.189 [2024-11-06 08:02:47.564092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.189 [2024-11-06 08:02:47.564105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.190 [2024-11-06 08:02:47.564169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.190 [2024-11-06 08:02:47.564185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:25.190 [2024-11-06 08:02:47.564209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.190 [2024-11-06 08:02:47.564220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.190 [2024-11-06 08:02:47.564401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.190 [2024-11-06 08:02:47.564424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:25.190 [2024-11-06 08:02:47.564436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.190 [2024-11-06 08:02:47.564448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.190 [2024-11-06 08:02:47.564505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.190 [2024-11-06 08:02:47.564523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:25.190 [2024-11-06 08:02:47.564537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.190 [2024-11-06 08:02:47.564549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.190 [2024-11-06 08:02:47.564600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.190 [2024-11-06 08:02:47.564617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:25.190 [2024-11-06 08:02:47.564629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.190 [2024-11-06 08:02:47.564641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.190 [2024-11-06 08:02:47.564707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.190 [2024-11-06 08:02:47.564725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:25.190 [2024-11-06 08:02:47.564737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.190 [2024-11-06 08:02:47.564749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.190 [2024-11-06 08:02:47.564941] mngt/ftl_mngt.c: 
459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 583.207 ms, result 0 00:26:26.565 00:26:26.565 00:26:26.565 08:02:49 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:26:26.824 [2024-11-06 08:02:49.270718] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:26:26.824 [2024-11-06 08:02:49.270916] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78484 ] 00:26:26.824 [2024-11-06 08:02:49.443305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:27.101 [2024-11-06 08:02:49.572372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:27.381 [2024-11-06 08:02:49.926305] bdev.c:8607:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:27.381 [2024-11-06 08:02:49.926435] bdev.c:8607:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:27.640 [2024-11-06 08:02:50.096868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.641 [2024-11-06 08:02:50.096959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:27.641 [2024-11-06 08:02:50.097001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:27.641 [2024-11-06 08:02:50.097015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.641 [2024-11-06 08:02:50.097111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.641 [2024-11-06 08:02:50.097132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:27.641 [2024-11-06 08:02:50.097149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:26:27.641 [2024-11-06 08:02:50.097161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.641 [2024-11-06 08:02:50.097194] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:27.641 [2024-11-06 08:02:50.098119] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:27.641 [2024-11-06 08:02:50.098163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.641 [2024-11-06 08:02:50.098178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:27.641 [2024-11-06 08:02:50.098192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.977 ms 00:26:27.641 [2024-11-06 08:02:50.098204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.641 [2024-11-06 08:02:50.100236] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:27.641 [2024-11-06 08:02:50.116971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.641 [2024-11-06 08:02:50.117070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:27.641 [2024-11-06 08:02:50.117108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.735 ms 00:26:27.641 [2024-11-06 08:02:50.117121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.641 [2024-11-06 
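The spdk_dd invocation above (--ib=ftl0 --skip=131072 --count=262144) lines up with the 1024 [MB] total in the earlier progress ticker. A minimal sketch of the arithmetic, assuming --skip and --count are counted in the ftl0 bdev's 4 KiB blocks:

    BLOCK = 4096      # assumption: ftl0 exposes 4 KiB blocks
    count = 262144    # --count from the command line
    skip  = 131072    # --skip

    print(count * BLOCK / 2**20)   # 1024.0 MiB to copy (matches the ticker)
    print(skip * BLOCK / 2**20)    # 512.0 MiB skipped at the start of ftl0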
08:02:50.117227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.641 [2024-11-06 08:02:50.117264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:27.641 [2024-11-06 08:02:50.117280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:26:27.641 [2024-11-06 08:02:50.117292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.641 [2024-11-06 08:02:50.126423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.641 [2024-11-06 08:02:50.126482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:27.641 [2024-11-06 08:02:50.126515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.013 ms 00:26:27.641 [2024-11-06 08:02:50.126536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.641 [2024-11-06 08:02:50.126649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.641 [2024-11-06 08:02:50.126669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:27.641 [2024-11-06 08:02:50.126683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:26:27.641 [2024-11-06 08:02:50.126694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.641 [2024-11-06 08:02:50.126785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.641 [2024-11-06 08:02:50.126804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:27.641 [2024-11-06 08:02:50.126817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:27.641 [2024-11-06 08:02:50.126828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.641 [2024-11-06 08:02:50.126879] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:27.641 [2024-11-06 08:02:50.131837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.641 [2024-11-06 08:02:50.131891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:27.641 [2024-11-06 08:02:50.131927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.981 ms 00:26:27.641 [2024-11-06 08:02:50.131939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.641 [2024-11-06 08:02:50.131981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.641 [2024-11-06 08:02:50.131996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:27.641 [2024-11-06 08:02:50.132009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:27.641 [2024-11-06 08:02:50.132020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.641 [2024-11-06 08:02:50.132098] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:27.641 [2024-11-06 08:02:50.132148] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:27.641 [2024-11-06 08:02:50.132193] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:27.641 [2024-11-06 08:02:50.132219] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:27.641 [2024-11-06 08:02:50.132352] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob 
store 0x150 bytes 00:26:27.641 [2024-11-06 08:02:50.132373] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:27.641 [2024-11-06 08:02:50.132393] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:27.641 [2024-11-06 08:02:50.132409] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:27.641 [2024-11-06 08:02:50.132423] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:27.641 [2024-11-06 08:02:50.132436] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:27.641 [2024-11-06 08:02:50.132447] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:27.641 [2024-11-06 08:02:50.132459] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:27.641 [2024-11-06 08:02:50.132477] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:27.641 [2024-11-06 08:02:50.132490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.641 [2024-11-06 08:02:50.132501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:27.641 [2024-11-06 08:02:50.132514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.396 ms 00:26:27.641 [2024-11-06 08:02:50.132525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.641 [2024-11-06 08:02:50.132625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.641 [2024-11-06 08:02:50.132641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:27.641 [2024-11-06 08:02:50.132654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:26:27.641 [2024-11-06 08:02:50.132666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.641 [2024-11-06 08:02:50.132791] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:27.641 [2024-11-06 08:02:50.132813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:27.641 [2024-11-06 08:02:50.132827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:27.641 [2024-11-06 08:02:50.132840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:27.641 [2024-11-06 08:02:50.132852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:27.641 [2024-11-06 08:02:50.132863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:27.641 [2024-11-06 08:02:50.132874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:27.641 [2024-11-06 08:02:50.132885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:27.641 [2024-11-06 08:02:50.132896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:27.641 [2024-11-06 08:02:50.132906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:27.641 [2024-11-06 08:02:50.132917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:27.641 [2024-11-06 08:02:50.132929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:27.641 [2024-11-06 08:02:50.132939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:27.641 [2024-11-06 08:02:50.132950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 
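The layout summary above also lets the L2P numbers be cross-checked. A minimal sketch; the per-entry mapping granularity of 4 KiB and the reading of the capacity gap as over-provisioning are assumptions, not stated in the log:

    entries   = 20971520   # "L2P entries" from the layout summary
    addr_size = 4          # "L2P address size" (bytes per entry)

    print(entries * addr_size / 2**20)  # 80.0 MiB -> matches the l2p region
    print(entries * 4096 / 2**20)       # 81920.0 MiB of user-addressable LBAs
    # vs. the 102400 MiB data_btm region: the ~20 GiB gap is presumably
    # FTL over-provisioning / spare bands rather than user capacity.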
00:26:27.641 [2024-11-06 08:02:50.132961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:27.641 [2024-11-06 08:02:50.132986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:27.641 [2024-11-06 08:02:50.132998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:27.641 [2024-11-06 08:02:50.133009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:27.641 [2024-11-06 08:02:50.133020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:27.641 [2024-11-06 08:02:50.133031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:27.641 [2024-11-06 08:02:50.133042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:27.641 [2024-11-06 08:02:50.133065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:27.641 [2024-11-06 08:02:50.133078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:27.641 [2024-11-06 08:02:50.133090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:27.641 [2024-11-06 08:02:50.133101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:27.641 [2024-11-06 08:02:50.133112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:27.641 [2024-11-06 08:02:50.133123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:27.641 [2024-11-06 08:02:50.133134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:27.641 [2024-11-06 08:02:50.133144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:27.641 [2024-11-06 08:02:50.133156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:27.641 [2024-11-06 08:02:50.133166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:27.641 [2024-11-06 08:02:50.133177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:27.641 [2024-11-06 08:02:50.133188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:27.641 [2024-11-06 08:02:50.133198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:27.641 [2024-11-06 08:02:50.133209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:27.641 [2024-11-06 08:02:50.133220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:27.641 [2024-11-06 08:02:50.133231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:27.641 [2024-11-06 08:02:50.133241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:27.641 [2024-11-06 08:02:50.133269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:27.641 [2024-11-06 08:02:50.133282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:27.641 [2024-11-06 08:02:50.133293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:27.642 [2024-11-06 08:02:50.133304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:27.642 [2024-11-06 08:02:50.133315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:27.642 [2024-11-06 08:02:50.133327] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:27.642 [2024-11-06 08:02:50.133339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:27.642 [2024-11-06 08:02:50.133351] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:27.642 [2024-11-06 08:02:50.133363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:27.642 [2024-11-06 08:02:50.133376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:27.642 [2024-11-06 08:02:50.133388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:27.642 [2024-11-06 08:02:50.133399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:27.642 [2024-11-06 08:02:50.133410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:27.642 [2024-11-06 08:02:50.133420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:27.642 [2024-11-06 08:02:50.133432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:27.642 [2024-11-06 08:02:50.133446] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:27.642 [2024-11-06 08:02:50.133461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:27.642 [2024-11-06 08:02:50.133473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:27.642 [2024-11-06 08:02:50.133485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:27.642 [2024-11-06 08:02:50.133496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:27.642 [2024-11-06 08:02:50.133508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:27.642 [2024-11-06 08:02:50.133520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:27.642 [2024-11-06 08:02:50.133532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:27.642 [2024-11-06 08:02:50.133543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:27.642 [2024-11-06 08:02:50.133554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:27.642 [2024-11-06 08:02:50.133565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:27.642 [2024-11-06 08:02:50.133576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:27.642 [2024-11-06 08:02:50.133587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:27.642 [2024-11-06 08:02:50.133598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:27.642 [2024-11-06 08:02:50.133610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:27.642 [2024-11-06 08:02:50.133622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:27.642 [2024-11-06 08:02:50.133633] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:27.642 [2024-11-06 08:02:50.133652] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:27.642 [2024-11-06 08:02:50.133667] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:27.642 [2024-11-06 08:02:50.133679] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:27.642 [2024-11-06 08:02:50.133690] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:27.642 [2024-11-06 08:02:50.133703] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:27.642 [2024-11-06 08:02:50.133716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.642 [2024-11-06 08:02:50.133728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:27.642 [2024-11-06 08:02:50.133740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.993 ms 00:26:27.642 [2024-11-06 08:02:50.133752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.642 [2024-11-06 08:02:50.172953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.642 [2024-11-06 08:02:50.173027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:27.642 [2024-11-06 08:02:50.173059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.132 ms 00:26:27.642 [2024-11-06 08:02:50.173074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.642 [2024-11-06 08:02:50.173203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.642 [2024-11-06 08:02:50.173220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:27.642 [2024-11-06 08:02:50.173234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:26:27.642 [2024-11-06 08:02:50.173246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.642 [2024-11-06 08:02:50.228603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.642 [2024-11-06 08:02:50.228675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:27.642 [2024-11-06 08:02:50.228697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.234 ms 00:26:27.642 [2024-11-06 08:02:50.228710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.642 [2024-11-06 08:02:50.228794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.642 [2024-11-06 08:02:50.228811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:27.642 [2024-11-06 08:02:50.228832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:27.642 [2024-11-06 08:02:50.228845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.642 [2024-11-06 08:02:50.229538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.642 [2024-11-06 08:02:50.229568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Initialize trim map 00:26:27.642 [2024-11-06 08:02:50.229584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.582 ms 00:26:27.642 [2024-11-06 08:02:50.229596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.642 [2024-11-06 08:02:50.229772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.642 [2024-11-06 08:02:50.229799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:27.642 [2024-11-06 08:02:50.229813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:26:27.642 [2024-11-06 08:02:50.229832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.642 [2024-11-06 08:02:50.248473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.642 [2024-11-06 08:02:50.248556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:27.642 [2024-11-06 08:02:50.248597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.603 ms 00:26:27.642 [2024-11-06 08:02:50.248610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.642 [2024-11-06 08:02:50.264866] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:26:27.642 [2024-11-06 08:02:50.264925] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:27.642 [2024-11-06 08:02:50.264946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.642 [2024-11-06 08:02:50.264961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:27.642 [2024-11-06 08:02:50.264976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.145 ms 00:26:27.642 [2024-11-06 08:02:50.264988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.901 [2024-11-06 08:02:50.293323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.901 [2024-11-06 08:02:50.293450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:27.901 [2024-11-06 08:02:50.293503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.254 ms 00:26:27.901 [2024-11-06 08:02:50.293516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.901 [2024-11-06 08:02:50.309581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.901 [2024-11-06 08:02:50.309664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:27.901 [2024-11-06 08:02:50.309699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.963 ms 00:26:27.901 [2024-11-06 08:02:50.309712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.901 [2024-11-06 08:02:50.324293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.901 [2024-11-06 08:02:50.324348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:27.901 [2024-11-06 08:02:50.324383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.526 ms 00:26:27.901 [2024-11-06 08:02:50.324394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.901 [2024-11-06 08:02:50.325425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.901 [2024-11-06 08:02:50.325479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:27.901 [2024-11-06 
08:02:50.325496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.866 ms 00:26:27.901 [2024-11-06 08:02:50.325513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.901 [2024-11-06 08:02:50.399395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.901 [2024-11-06 08:02:50.399495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:27.901 [2024-11-06 08:02:50.399544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.854 ms 00:26:27.901 [2024-11-06 08:02:50.399558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.901 [2024-11-06 08:02:50.413986] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:27.901 [2024-11-06 08:02:50.418316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.901 [2024-11-06 08:02:50.418366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:27.901 [2024-11-06 08:02:50.418403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.634 ms 00:26:27.901 [2024-11-06 08:02:50.418415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.901 [2024-11-06 08:02:50.418549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.901 [2024-11-06 08:02:50.418587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:27.901 [2024-11-06 08:02:50.418601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:27.901 [2024-11-06 08:02:50.418618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.901 [2024-11-06 08:02:50.420586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.901 [2024-11-06 08:02:50.420663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:27.901 [2024-11-06 08:02:50.420679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.908 ms 00:26:27.901 [2024-11-06 08:02:50.420691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.901 [2024-11-06 08:02:50.420732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.901 [2024-11-06 08:02:50.420749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:27.901 [2024-11-06 08:02:50.420761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:27.901 [2024-11-06 08:02:50.420773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.901 [2024-11-06 08:02:50.420836] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:27.901 [2024-11-06 08:02:50.420854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.901 [2024-11-06 08:02:50.420882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:27.901 [2024-11-06 08:02:50.420895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:26:27.901 [2024-11-06 08:02:50.420906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.901 [2024-11-06 08:02:50.451036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.901 [2024-11-06 08:02:50.451119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:27.901 [2024-11-06 08:02:50.451157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.098 ms 00:26:27.901 [2024-11-06 08:02:50.451177] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:09.901 [2024-11-06 08:03:32.451301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:09.901 [2024-11-06 08:03:32.451322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:27:09.901 [2024-11-06 08:03:32.451336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms
00:27:09.901 [2024-11-06 08:03:32.451348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:09.901 [2024-11-06 08:03:32.455361] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 356.870 ms, result 0
00:26:29.276  [2024-11-06T08:02:52.840Z] Copying: 20/1024 [MB] (20 MBps)
[... 39 intermediate 'Copying: N/1024 [MB]' progress updates (23-25 MBps) elided ...]
[2024-11-06T08:03:32.029Z] Copying: 1015/1024 [MB] (24 MBps)
[2024-11-06T08:03:32.287Z] Copying: 1024/1024 [MB] (average 24 MBps)
[2024-11-06 08:03:32.121981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:09.658 [2024-11-06 08:03:32.122057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:27:09.658 [2024-11-06 08:03:32.122080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.004 ms 00:27:09.658 [2024-11-06 08:03:32.122105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.658 [2024-11-06 08:03:32.122139] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:09.658 [2024-11-06 08:03:32.126303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.658 [2024-11-06 08:03:32.126355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:09.658 [2024-11-06 08:03:32.126373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.138 ms 00:27:09.658 [2024-11-06 08:03:32.126386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.658 [2024-11-06 08:03:32.126651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.658 [2024-11-06 08:03:32.126680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:09.658 [2024-11-06 08:03:32.126695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:27:09.658 [2024-11-06 08:03:32.126713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.658 [2024-11-06 08:03:32.131387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.658 [2024-11-06 08:03:32.131430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:09.658 [2024-11-06 08:03:32.131448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.650 ms 00:27:09.658 [2024-11-06 08:03:32.131461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.658 [2024-11-06 08:03:32.138231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.658 [2024-11-06 08:03:32.138307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:09.658 [2024-11-06 08:03:32.138323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.726 ms 00:27:09.658 [2024-11-06 08:03:32.138335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.658 [2024-11-06 08:03:32.170905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.658 [2024-11-06 08:03:32.170986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:09.659 [2024-11-06 08:03:32.171006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.459 ms 00:27:09.659 [2024-11-06 08:03:32.171018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.659 [2024-11-06 08:03:32.188712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.659 [2024-11-06 08:03:32.188810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:09.659 [2024-11-06 08:03:32.188831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.625 ms 00:27:09.659 [2024-11-06 08:03:32.188843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.919 [2024-11-06 08:03:32.303959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.919 [2024-11-06 08:03:32.304081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:09.919 [2024-11-06 08:03:32.304123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 115.034 ms 00:27:09.919 [2024-11-06 08:03:32.304136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.919 [2024-11-06 08:03:32.334554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.919 
[2024-11-06 08:03:32.334651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:27:09.919 [2024-11-06 08:03:32.334670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.392 ms
00:27:09.919 [2024-11-06 08:03:32.334683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:09.919 [2024-11-06 08:03:32.363492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:09.919 [2024-11-06 08:03:32.363574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:27:09.919 [2024-11-06 08:03:32.363615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.739 ms
00:27:09.919 [2024-11-06 08:03:32.363627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:09.919 [2024-11-06 08:03:32.393354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:09.919 [2024-11-06 08:03:32.393438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:27:09.919 [2024-11-06 08:03:32.393458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.661 ms
00:27:09.919 [2024-11-06 08:03:32.393470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:09.919 [2024-11-06 08:03:32.422282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:09.919 [2024-11-06 08:03:32.422366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:27:09.919 [2024-11-06 08:03:32.422386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.680 ms
00:27:09.919 [2024-11-06 08:03:32.422397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:09.919 [2024-11-06 08:03:32.422475] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:27:09.919 [2024-11-06 08:03:32.422501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open
[... Bands 2-100: 99 identical ftl_dev_dump_bands entries elided, each "0 / 261120 wr_cnt: 0 state: free" ...]
[2024-11-06 08:03:32.423787] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:27:09.920 [2024-11-06 08:03:32.423799] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b72774d1-8924-47f6-808c-25def4de7f7d
00:27:09.920 [2024-11-06 08:03:32.423812] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072
00:27:09.920 [2024-11-06 08:03:32.423823] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 16320
00:27:09.920 [2024-11-06 08:03:32.423835] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 15360
00:27:09.920 [2024-11-06 08:03:32.423847] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0625
00:27:09.920 [2024-11-06 08:03:32.423859] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:27:09.920 [2024-11-06 08:03:32.423879] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:27:09.920 [2024-11-06 08:03:32.423891] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:27:09.920 [2024-11-06 08:03:32.423914] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   low: 0
00:27:09.920 [2024-11-06 08:03:32.423925] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   start: 0
00:27:09.920 [2024-11-06 08:03:32.423936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:09.920 [2024-11-06 08:03:32.423947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:27:09.920 [2024-11-06 08:03:32.423960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.463 ms
00:27:09.920 [2024-11-06
08:03:32.423971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.920 [2024-11-06 08:03:32.440309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.920 [2024-11-06 08:03:32.440380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:09.920 [2024-11-06 08:03:32.440399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.285 ms 00:27:09.920 [2024-11-06 08:03:32.440418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.920 [2024-11-06 08:03:32.440915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.920 [2024-11-06 08:03:32.440939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:09.920 [2024-11-06 08:03:32.440953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.422 ms 00:27:09.920 [2024-11-06 08:03:32.440965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.920 [2024-11-06 08:03:32.482058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.920 [2024-11-06 08:03:32.482157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:09.920 [2024-11-06 08:03:32.482177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.920 [2024-11-06 08:03:32.482190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.920 [2024-11-06 08:03:32.482317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.920 [2024-11-06 08:03:32.482335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:09.920 [2024-11-06 08:03:32.482348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.920 [2024-11-06 08:03:32.482360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.920 [2024-11-06 08:03:32.482454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.920 [2024-11-06 08:03:32.482474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:09.920 [2024-11-06 08:03:32.482495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.920 [2024-11-06 08:03:32.482507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.920 [2024-11-06 08:03:32.482530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.920 [2024-11-06 08:03:32.482544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:09.920 [2024-11-06 08:03:32.482557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.920 [2024-11-06 08:03:32.482568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.179 [2024-11-06 08:03:32.591068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:10.179 [2024-11-06 08:03:32.591149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:10.179 [2024-11-06 08:03:32.591177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:10.179 [2024-11-06 08:03:32.591189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.179 [2024-11-06 08:03:32.672626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:10.179 [2024-11-06 08:03:32.672707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:10.179 [2024-11-06 08:03:32.672727] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:10.179 [2024-11-06 08:03:32.672739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.179 [2024-11-06 08:03:32.672858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:10.179 [2024-11-06 08:03:32.672877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:10.180 [2024-11-06 08:03:32.672890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:10.180 [2024-11-06 08:03:32.672907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.180 [2024-11-06 08:03:32.672957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:10.180 [2024-11-06 08:03:32.672974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:10.180 [2024-11-06 08:03:32.672986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:10.180 [2024-11-06 08:03:32.672997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.180 [2024-11-06 08:03:32.673143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:10.180 [2024-11-06 08:03:32.673166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:10.180 [2024-11-06 08:03:32.673179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:10.180 [2024-11-06 08:03:32.673191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.180 [2024-11-06 08:03:32.673267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:10.180 [2024-11-06 08:03:32.673304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:10.180 [2024-11-06 08:03:32.673318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:10.180 [2024-11-06 08:03:32.673329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.180 [2024-11-06 08:03:32.673376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:10.180 [2024-11-06 08:03:32.673392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:10.180 [2024-11-06 08:03:32.673404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:10.180 [2024-11-06 08:03:32.673416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.180 [2024-11-06 08:03:32.673477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:10.180 [2024-11-06 08:03:32.673495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:10.180 [2024-11-06 08:03:32.673508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:10.180 [2024-11-06 08:03:32.673519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:10.180 [2024-11-06 08:03:32.673668] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 551.652 ms, result 0 00:27:11.115 00:27:11.115 00:27:11.115 08:03:33 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:13.644 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:27:13.644 08:03:35 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:27:13.644 08:03:35 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:27:13.644 08:03:35 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:13.644 08:03:35 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:13.644 08:03:35 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:13.644 08:03:35 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 76772 00:27:13.645 08:03:35 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 76772 ']' 00:27:13.645 08:03:35 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 76772 00:27:13.645 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (76772) - No such process 00:27:13.645 Process with pid 76772 is not found 00:27:13.645 08:03:35 ftl.ftl_restore -- common/autotest_common.sh@977 -- # echo 'Process with pid 76772 is not found' 00:27:13.645 Remove shared memory files 00:27:13.645 08:03:35 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:27:13.645 08:03:35 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:13.645 08:03:35 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:27:13.645 08:03:35 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:27:13.645 08:03:35 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:27:13.645 08:03:35 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:13.645 08:03:35 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:27:13.645 00:27:13.645 real 3m34.588s 00:27:13.645 user 3m18.745s 00:27:13.645 sys 0m18.004s 00:27:13.645 08:03:35 ftl.ftl_restore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:13.645 08:03:35 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:27:13.645 ************************************ 00:27:13.645 END TEST ftl_restore 00:27:13.645 ************************************ 00:27:13.645 08:03:35 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:27:13.645 08:03:35 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:27:13.645 08:03:35 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:13.645 08:03:35 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:13.645 ************************************ 00:27:13.645 START TEST ftl_dirty_shutdown 00:27:13.645 ************************************ 00:27:13.645 08:03:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:27:13.645 * Looking for test storage... 
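The 'md5sum -c' check above is the pass/fail core of ftl_restore: a checksum of the test data is recorded before the device is torn down, the data is read back from the restored FTL device into the same file, and the stored checksum is re-verified. A minimal sketch of that pattern, with paths as in this run; the middle step is an illustration of the flow, not the literal contents of restore.sh:

  testfile=/home/vagrant/spdk_repo/spdk/test/ftl/testfile
  md5sum "$testfile" > "$testfile.md5"    # checksum taken when the data is first written
  # ... FTL bdev is shut down and brought back up, region read back into $testfile ...
  md5sum -c "$testfile.md5"               # "testfile: OK" above means the restore was byte-identical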
00:27:13.645 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:13.645 08:03:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:27:13.645 08:03:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:27:13.645 08:03:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1689 -- # lcov --version 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:27:13.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:13.645 --rc genhtml_branch_coverage=1 00:27:13.645 --rc genhtml_function_coverage=1 00:27:13.645 --rc genhtml_legend=1 00:27:13.645 --rc geninfo_all_blocks=1 00:27:13.645 --rc geninfo_unexecuted_blocks=1 00:27:13.645 00:27:13.645 ' 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:27:13.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:13.645 --rc genhtml_branch_coverage=1 00:27:13.645 --rc genhtml_function_coverage=1 00:27:13.645 --rc genhtml_legend=1 00:27:13.645 --rc geninfo_all_blocks=1 00:27:13.645 --rc geninfo_unexecuted_blocks=1 00:27:13.645 00:27:13.645 ' 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:27:13.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:13.645 --rc genhtml_branch_coverage=1 00:27:13.645 --rc genhtml_function_coverage=1 00:27:13.645 --rc genhtml_legend=1 00:27:13.645 --rc geninfo_all_blocks=1 00:27:13.645 --rc geninfo_unexecuted_blocks=1 00:27:13.645 00:27:13.645 ' 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:27:13.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:13.645 --rc genhtml_branch_coverage=1 00:27:13.645 --rc genhtml_function_coverage=1 00:27:13.645 --rc genhtml_legend=1 00:27:13.645 --rc geninfo_all_blocks=1 00:27:13.645 --rc geninfo_unexecuted_blocks=1 00:27:13.645 00:27:13.645 ' 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:27:13.645 08:03:36 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:27:13.645 08:03:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=79006 00:27:13.646 08:03:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:27:13.646 08:03:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 79006 00:27:13.646 08:03:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # '[' -z 79006 ']' 00:27:13.646 08:03:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:13.646 08:03:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:13.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:13.646 08:03:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:13.646 08:03:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:13.646 08:03:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:13.646 [2024-11-06 08:03:36.207399] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
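The launch traced above is the standard SPDK test bring-up: spdk_tgt is started on core mask 0x1, its pid is captured as svcpid (79006 here), and waitforlisten blocks until the RPC server answers on /var/tmp/spdk.sock. A reduced sketch of that sequence; the polling loop is a simplified stand-in for the real waitforlisten helper, and rpc_get_methods is used here only as a cheap liveness probe:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
  svcpid=$!
  while ! /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    kill -0 "$svcpid" || exit 1    # give up if the target died during startup
    sleep 0.5
  done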
00:27:13.646 [2024-11-06 08:03:36.207585] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79006 ] 00:27:13.904 [2024-11-06 08:03:36.399851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:14.163 [2024-11-06 08:03:36.556053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:15.098 08:03:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:15.098 08:03:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # return 0 00:27:15.098 08:03:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:27:15.098 08:03:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:27:15.098 08:03:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:15.098 08:03:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:27:15.098 08:03:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:27:15.098 08:03:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:27:15.361 08:03:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:27:15.361 08:03:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:27:15.361 08:03:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:27:15.361 08:03:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:27:15.361 08:03:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:15.361 08:03:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:27:15.361 08:03:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:27:15.361 08:03:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:27:15.656 08:03:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:15.656 { 00:27:15.656 "name": "nvme0n1", 00:27:15.656 "aliases": [ 00:27:15.656 "fa2be94a-8d0e-4eab-9e4b-38c596c96646" 00:27:15.656 ], 00:27:15.656 "product_name": "NVMe disk", 00:27:15.656 "block_size": 4096, 00:27:15.656 "num_blocks": 1310720, 00:27:15.656 "uuid": "fa2be94a-8d0e-4eab-9e4b-38c596c96646", 00:27:15.656 "numa_id": -1, 00:27:15.656 "assigned_rate_limits": { 00:27:15.656 "rw_ios_per_sec": 0, 00:27:15.656 "rw_mbytes_per_sec": 0, 00:27:15.656 "r_mbytes_per_sec": 0, 00:27:15.656 "w_mbytes_per_sec": 0 00:27:15.656 }, 00:27:15.656 "claimed": true, 00:27:15.656 "claim_type": "read_many_write_one", 00:27:15.657 "zoned": false, 00:27:15.657 "supported_io_types": { 00:27:15.657 "read": true, 00:27:15.657 "write": true, 00:27:15.657 "unmap": true, 00:27:15.657 "flush": true, 00:27:15.657 "reset": true, 00:27:15.657 "nvme_admin": true, 00:27:15.657 "nvme_io": true, 00:27:15.657 "nvme_io_md": false, 00:27:15.657 "write_zeroes": true, 00:27:15.657 "zcopy": false, 00:27:15.657 "get_zone_info": false, 00:27:15.657 "zone_management": false, 00:27:15.657 "zone_append": false, 00:27:15.657 "compare": true, 00:27:15.657 "compare_and_write": false, 00:27:15.657 "abort": true, 00:27:15.657 "seek_hole": false, 00:27:15.657 "seek_data": false, 00:27:15.657 
"copy": true, 00:27:15.657 "nvme_iov_md": false 00:27:15.657 }, 00:27:15.657 "driver_specific": { 00:27:15.657 "nvme": [ 00:27:15.657 { 00:27:15.657 "pci_address": "0000:00:11.0", 00:27:15.657 "trid": { 00:27:15.657 "trtype": "PCIe", 00:27:15.657 "traddr": "0000:00:11.0" 00:27:15.657 }, 00:27:15.657 "ctrlr_data": { 00:27:15.657 "cntlid": 0, 00:27:15.657 "vendor_id": "0x1b36", 00:27:15.657 "model_number": "QEMU NVMe Ctrl", 00:27:15.657 "serial_number": "12341", 00:27:15.657 "firmware_revision": "8.0.0", 00:27:15.657 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:15.657 "oacs": { 00:27:15.657 "security": 0, 00:27:15.657 "format": 1, 00:27:15.657 "firmware": 0, 00:27:15.657 "ns_manage": 1 00:27:15.657 }, 00:27:15.657 "multi_ctrlr": false, 00:27:15.657 "ana_reporting": false 00:27:15.657 }, 00:27:15.657 "vs": { 00:27:15.657 "nvme_version": "1.4" 00:27:15.657 }, 00:27:15.657 "ns_data": { 00:27:15.657 "id": 1, 00:27:15.657 "can_share": false 00:27:15.657 } 00:27:15.657 } 00:27:15.657 ], 00:27:15.657 "mp_policy": "active_passive" 00:27:15.657 } 00:27:15.657 } 00:27:15.657 ]' 00:27:15.657 08:03:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:15.657 08:03:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:27:15.657 08:03:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:15.657 08:03:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:27:15.657 08:03:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:27:15.657 08:03:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:27:15.657 08:03:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:27:15.657 08:03:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:27:15.657 08:03:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:27:15.657 08:03:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:15.657 08:03:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:15.925 08:03:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=7e01ea90-f844-4e76-9e66-c690124987ee 00:27:15.925 08:03:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:27:15.925 08:03:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7e01ea90-f844-4e76-9e66-c690124987ee 00:27:16.184 08:03:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:27:16.443 08:03:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=9b277449-1318-4efc-8667-56dca41a5e61 00:27:16.443 08:03:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 9b277449-1318-4efc-8667-56dca41a5e61 00:27:16.701 08:03:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=6b957f6f-1e7f-46a7-8b5a-74f72bb190d9 00:27:16.701 08:03:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:27:16.701 08:03:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 6b957f6f-1e7f-46a7-8b5a-74f72bb190d9 00:27:16.701 08:03:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:27:16.701 08:03:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:27:16.701 08:03:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=6b957f6f-1e7f-46a7-8b5a-74f72bb190d9 00:27:16.701 08:03:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:27:16.701 08:03:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 6b957f6f-1e7f-46a7-8b5a-74f72bb190d9 00:27:16.701 08:03:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=6b957f6f-1e7f-46a7-8b5a-74f72bb190d9 00:27:16.701 08:03:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:16.701 08:03:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:27:16.701 08:03:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:27:16.701 08:03:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6b957f6f-1e7f-46a7-8b5a-74f72bb190d9 00:27:16.960 08:03:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:16.960 { 00:27:16.960 "name": "6b957f6f-1e7f-46a7-8b5a-74f72bb190d9", 00:27:16.960 "aliases": [ 00:27:16.960 "lvs/nvme0n1p0" 00:27:16.960 ], 00:27:16.960 "product_name": "Logical Volume", 00:27:16.960 "block_size": 4096, 00:27:16.960 "num_blocks": 26476544, 00:27:16.960 "uuid": "6b957f6f-1e7f-46a7-8b5a-74f72bb190d9", 00:27:16.961 "assigned_rate_limits": { 00:27:16.961 "rw_ios_per_sec": 0, 00:27:16.961 "rw_mbytes_per_sec": 0, 00:27:16.961 "r_mbytes_per_sec": 0, 00:27:16.961 "w_mbytes_per_sec": 0 00:27:16.961 }, 00:27:16.961 "claimed": false, 00:27:16.961 "zoned": false, 00:27:16.961 "supported_io_types": { 00:27:16.961 "read": true, 00:27:16.961 "write": true, 00:27:16.961 "unmap": true, 00:27:16.961 "flush": false, 00:27:16.961 "reset": true, 00:27:16.961 "nvme_admin": false, 00:27:16.961 "nvme_io": false, 00:27:16.961 "nvme_io_md": false, 00:27:16.961 "write_zeroes": true, 00:27:16.961 "zcopy": false, 00:27:16.961 "get_zone_info": false, 00:27:16.961 "zone_management": false, 00:27:16.961 "zone_append": false, 00:27:16.961 "compare": false, 00:27:16.961 "compare_and_write": false, 00:27:16.961 "abort": false, 00:27:16.961 "seek_hole": true, 00:27:16.961 "seek_data": true, 00:27:16.961 "copy": false, 00:27:16.961 "nvme_iov_md": false 00:27:16.961 }, 00:27:16.961 "driver_specific": { 00:27:16.961 "lvol": { 00:27:16.961 "lvol_store_uuid": "9b277449-1318-4efc-8667-56dca41a5e61", 00:27:16.961 "base_bdev": "nvme0n1", 00:27:16.961 "thin_provision": true, 00:27:16.961 "num_allocated_clusters": 0, 00:27:16.961 "snapshot": false, 00:27:16.961 "clone": false, 00:27:16.961 "esnap_clone": false 00:27:16.961 } 00:27:16.961 } 00:27:16.961 } 00:27:16.961 ]' 00:27:16.961 08:03:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:16.961 08:03:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:27:16.961 08:03:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:17.219 08:03:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:27:17.219 08:03:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:27:17.219 08:03:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:27:17.219 08:03:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:27:17.219 08:03:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:27:17.219 08:03:39 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:27:17.477 08:03:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:27:17.477 08:03:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:27:17.477 08:03:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 6b957f6f-1e7f-46a7-8b5a-74f72bb190d9 00:27:17.477 08:03:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=6b957f6f-1e7f-46a7-8b5a-74f72bb190d9 00:27:17.477 08:03:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:17.477 08:03:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:27:17.477 08:03:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:27:17.477 08:03:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6b957f6f-1e7f-46a7-8b5a-74f72bb190d9 00:27:17.736 08:03:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:17.736 { 00:27:17.736 "name": "6b957f6f-1e7f-46a7-8b5a-74f72bb190d9", 00:27:17.736 "aliases": [ 00:27:17.736 "lvs/nvme0n1p0" 00:27:17.736 ], 00:27:17.736 "product_name": "Logical Volume", 00:27:17.736 "block_size": 4096, 00:27:17.736 "num_blocks": 26476544, 00:27:17.736 "uuid": "6b957f6f-1e7f-46a7-8b5a-74f72bb190d9", 00:27:17.736 "assigned_rate_limits": { 00:27:17.736 "rw_ios_per_sec": 0, 00:27:17.736 "rw_mbytes_per_sec": 0, 00:27:17.736 "r_mbytes_per_sec": 0, 00:27:17.736 "w_mbytes_per_sec": 0 00:27:17.736 }, 00:27:17.736 "claimed": false, 00:27:17.736 "zoned": false, 00:27:17.736 "supported_io_types": { 00:27:17.736 "read": true, 00:27:17.736 "write": true, 00:27:17.736 "unmap": true, 00:27:17.736 "flush": false, 00:27:17.736 "reset": true, 00:27:17.736 "nvme_admin": false, 00:27:17.736 "nvme_io": false, 00:27:17.736 "nvme_io_md": false, 00:27:17.736 "write_zeroes": true, 00:27:17.736 "zcopy": false, 00:27:17.736 "get_zone_info": false, 00:27:17.736 "zone_management": false, 00:27:17.736 "zone_append": false, 00:27:17.736 "compare": false, 00:27:17.736 "compare_and_write": false, 00:27:17.736 "abort": false, 00:27:17.736 "seek_hole": true, 00:27:17.736 "seek_data": true, 00:27:17.736 "copy": false, 00:27:17.736 "nvme_iov_md": false 00:27:17.736 }, 00:27:17.736 "driver_specific": { 00:27:17.736 "lvol": { 00:27:17.736 "lvol_store_uuid": "9b277449-1318-4efc-8667-56dca41a5e61", 00:27:17.736 "base_bdev": "nvme0n1", 00:27:17.736 "thin_provision": true, 00:27:17.736 "num_allocated_clusters": 0, 00:27:17.736 "snapshot": false, 00:27:17.736 "clone": false, 00:27:17.736 "esnap_clone": false 00:27:17.736 } 00:27:17.736 } 00:27:17.736 } 00:27:17.736 ]' 00:27:17.736 08:03:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:17.736 08:03:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:27:17.736 08:03:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:17.736 08:03:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:27:17.736 08:03:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:27:17.736 08:03:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:27:17.736 08:03:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:27:17.736 08:03:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:27:17.995 08:03:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:27:17.995 08:03:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 6b957f6f-1e7f-46a7-8b5a-74f72bb190d9 00:27:17.995 08:03:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=6b957f6f-1e7f-46a7-8b5a-74f72bb190d9 00:27:17.995 08:03:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:17.995 08:03:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:27:17.995 08:03:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:27:17.995 08:03:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6b957f6f-1e7f-46a7-8b5a-74f72bb190d9 00:27:18.254 08:03:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:18.254 { 00:27:18.254 "name": "6b957f6f-1e7f-46a7-8b5a-74f72bb190d9", 00:27:18.254 "aliases": [ 00:27:18.254 "lvs/nvme0n1p0" 00:27:18.254 ], 00:27:18.254 "product_name": "Logical Volume", 00:27:18.254 "block_size": 4096, 00:27:18.254 "num_blocks": 26476544, 00:27:18.254 "uuid": "6b957f6f-1e7f-46a7-8b5a-74f72bb190d9", 00:27:18.254 "assigned_rate_limits": { 00:27:18.254 "rw_ios_per_sec": 0, 00:27:18.254 "rw_mbytes_per_sec": 0, 00:27:18.254 "r_mbytes_per_sec": 0, 00:27:18.254 "w_mbytes_per_sec": 0 00:27:18.254 }, 00:27:18.254 "claimed": false, 00:27:18.254 "zoned": false, 00:27:18.254 "supported_io_types": { 00:27:18.254 "read": true, 00:27:18.254 "write": true, 00:27:18.254 "unmap": true, 00:27:18.254 "flush": false, 00:27:18.254 "reset": true, 00:27:18.254 "nvme_admin": false, 00:27:18.254 "nvme_io": false, 00:27:18.254 "nvme_io_md": false, 00:27:18.254 "write_zeroes": true, 00:27:18.254 "zcopy": false, 00:27:18.254 "get_zone_info": false, 00:27:18.254 "zone_management": false, 00:27:18.254 "zone_append": false, 00:27:18.254 "compare": false, 00:27:18.254 "compare_and_write": false, 00:27:18.254 "abort": false, 00:27:18.254 "seek_hole": true, 00:27:18.254 "seek_data": true, 00:27:18.254 "copy": false, 00:27:18.254 "nvme_iov_md": false 00:27:18.254 }, 00:27:18.254 "driver_specific": { 00:27:18.254 "lvol": { 00:27:18.254 "lvol_store_uuid": "9b277449-1318-4efc-8667-56dca41a5e61", 00:27:18.254 "base_bdev": "nvme0n1", 00:27:18.254 "thin_provision": true, 00:27:18.254 "num_allocated_clusters": 0, 00:27:18.254 "snapshot": false, 00:27:18.254 "clone": false, 00:27:18.254 "esnap_clone": false 00:27:18.254 } 00:27:18.254 } 00:27:18.254 } 00:27:18.254 ]' 00:27:18.254 08:03:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:18.513 08:03:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:27:18.513 08:03:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:18.513 08:03:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:27:18.513 08:03:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:27:18.513 08:03:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:27:18.513 08:03:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:27:18.513 08:03:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 6b957f6f-1e7f-46a7-8b5a-74f72bb190d9 
--l2p_dram_limit 10' 00:27:18.513 08:03:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:27:18.513 08:03:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:27:18.513 08:03:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:27:18.513 08:03:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 6b957f6f-1e7f-46a7-8b5a-74f72bb190d9 --l2p_dram_limit 10 -c nvc0n1p0 00:27:18.772 [2024-11-06 08:03:41.184614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.772 [2024-11-06 08:03:41.184709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:18.772 [2024-11-06 08:03:41.184752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:18.772 [2024-11-06 08:03:41.184765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.772 [2024-11-06 08:03:41.184865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.772 [2024-11-06 08:03:41.184887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:18.772 [2024-11-06 08:03:41.184903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:27:18.772 [2024-11-06 08:03:41.184915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.772 [2024-11-06 08:03:41.184949] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:18.772 [2024-11-06 08:03:41.186068] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:18.772 [2024-11-06 08:03:41.186128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.772 [2024-11-06 08:03:41.186143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:18.772 [2024-11-06 08:03:41.186162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.184 ms 00:27:18.772 [2024-11-06 08:03:41.186174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.772 [2024-11-06 08:03:41.186359] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID abdcc990-59fc-4691-b84d-7ee957ef350d 00:27:18.772 [2024-11-06 08:03:41.188335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.772 [2024-11-06 08:03:41.188397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:27:18.772 [2024-11-06 08:03:41.188424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:27:18.772 [2024-11-06 08:03:41.188438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.772 [2024-11-06 08:03:41.198602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.772 [2024-11-06 08:03:41.198702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:18.772 [2024-11-06 08:03:41.198722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.077 ms 00:27:18.772 [2024-11-06 08:03:41.198741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.772 [2024-11-06 08:03:41.198895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.772 [2024-11-06 08:03:41.198919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:18.772 [2024-11-06 08:03:41.198933] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:27:18.772 [2024-11-06 08:03:41.198953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.772 [2024-11-06 08:03:41.199087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.772 [2024-11-06 08:03:41.199111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:18.772 [2024-11-06 08:03:41.199126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:27:18.772 [2024-11-06 08:03:41.199142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.772 [2024-11-06 08:03:41.199181] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:18.772 [2024-11-06 08:03:41.204504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.772 [2024-11-06 08:03:41.204560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:18.772 [2024-11-06 08:03:41.204603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.331 ms 00:27:18.772 [2024-11-06 08:03:41.204626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.772 [2024-11-06 08:03:41.204679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.772 [2024-11-06 08:03:41.204697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:18.772 [2024-11-06 08:03:41.204714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:27:18.772 [2024-11-06 08:03:41.204726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.772 [2024-11-06 08:03:41.204780] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:27:18.772 [2024-11-06 08:03:41.204946] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:18.772 [2024-11-06 08:03:41.204973] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:18.772 [2024-11-06 08:03:41.204990] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:18.772 [2024-11-06 08:03:41.205007] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:18.772 [2024-11-06 08:03:41.205022] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:18.773 [2024-11-06 08:03:41.205037] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:18.773 [2024-11-06 08:03:41.205049] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:18.773 [2024-11-06 08:03:41.205076] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:18.773 [2024-11-06 08:03:41.205088] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:18.773 [2024-11-06 08:03:41.205107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.773 [2024-11-06 08:03:41.205120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:18.773 [2024-11-06 08:03:41.205135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.331 ms 00:27:18.773 [2024-11-06 08:03:41.205161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.773 [2024-11-06 08:03:41.205282] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.773 [2024-11-06 08:03:41.205303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:18.773 [2024-11-06 08:03:41.205319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:27:18.773 [2024-11-06 08:03:41.205331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.773 [2024-11-06 08:03:41.205449] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:18.773 [2024-11-06 08:03:41.205470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:18.773 [2024-11-06 08:03:41.205486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:18.773 [2024-11-06 08:03:41.205498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:18.773 [2024-11-06 08:03:41.205513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:18.773 [2024-11-06 08:03:41.205525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:18.773 [2024-11-06 08:03:41.205539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:18.773 [2024-11-06 08:03:41.205550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:18.773 [2024-11-06 08:03:41.205563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:18.773 [2024-11-06 08:03:41.205574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:18.773 [2024-11-06 08:03:41.205587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:18.773 [2024-11-06 08:03:41.205598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:18.773 [2024-11-06 08:03:41.205611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:18.773 [2024-11-06 08:03:41.205622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:18.773 [2024-11-06 08:03:41.205635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:18.773 [2024-11-06 08:03:41.205646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:18.773 [2024-11-06 08:03:41.205662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:18.773 [2024-11-06 08:03:41.205673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:18.773 [2024-11-06 08:03:41.205685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:18.773 [2024-11-06 08:03:41.205696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:18.773 [2024-11-06 08:03:41.205711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:18.773 [2024-11-06 08:03:41.205724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:18.773 [2024-11-06 08:03:41.205738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:18.773 [2024-11-06 08:03:41.205749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:18.773 [2024-11-06 08:03:41.205762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:18.773 [2024-11-06 08:03:41.205773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:18.773 [2024-11-06 08:03:41.205786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:18.773 [2024-11-06 08:03:41.205797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:18.773 [2024-11-06 08:03:41.205812] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:18.773 [2024-11-06 08:03:41.205823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:18.773 [2024-11-06 08:03:41.205836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:18.773 [2024-11-06 08:03:41.205847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:18.773 [2024-11-06 08:03:41.205863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:18.773 [2024-11-06 08:03:41.205874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:18.773 [2024-11-06 08:03:41.205887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:18.773 [2024-11-06 08:03:41.205898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:18.773 [2024-11-06 08:03:41.205911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:18.773 [2024-11-06 08:03:41.205922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:18.773 [2024-11-06 08:03:41.205935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:18.773 [2024-11-06 08:03:41.205946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:18.773 [2024-11-06 08:03:41.205959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:18.773 [2024-11-06 08:03:41.205970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:18.773 [2024-11-06 08:03:41.205983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:18.773 [2024-11-06 08:03:41.205994] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:18.773 [2024-11-06 08:03:41.206008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:18.773 [2024-11-06 08:03:41.206021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:18.773 [2024-11-06 08:03:41.206035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:18.773 [2024-11-06 08:03:41.206047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:18.773 [2024-11-06 08:03:41.206065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:18.773 [2024-11-06 08:03:41.206076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:18.773 [2024-11-06 08:03:41.206090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:18.773 [2024-11-06 08:03:41.206101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:18.773 [2024-11-06 08:03:41.206114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:18.773 [2024-11-06 08:03:41.206132] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:18.773 [2024-11-06 08:03:41.206151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:18.773 [2024-11-06 08:03:41.206165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:18.773 [2024-11-06 08:03:41.206180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:18.773 [2024-11-06 08:03:41.206192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:18.773 [2024-11-06 08:03:41.206206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:18.773 [2024-11-06 08:03:41.206217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:18.773 [2024-11-06 08:03:41.206231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:18.773 [2024-11-06 08:03:41.206243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:18.773 [2024-11-06 08:03:41.206273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:18.773 [2024-11-06 08:03:41.206286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:18.773 [2024-11-06 08:03:41.206303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:18.773 [2024-11-06 08:03:41.206315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:18.773 [2024-11-06 08:03:41.206329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:18.773 [2024-11-06 08:03:41.206341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:18.773 [2024-11-06 08:03:41.206356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:18.773 [2024-11-06 08:03:41.206367] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:18.773 [2024-11-06 08:03:41.206389] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:18.773 [2024-11-06 08:03:41.206402] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:18.773 [2024-11-06 08:03:41.206417] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:18.773 [2024-11-06 08:03:41.206429] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:18.773 [2024-11-06 08:03:41.206443] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:18.773 [2024-11-06 08:03:41.206456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.773 [2024-11-06 08:03:41.206472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:18.773 [2024-11-06 08:03:41.206485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.082 ms 00:27:18.773 [2024-11-06 08:03:41.206499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.773 [2024-11-06 08:03:41.206560] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:27:18.773 [2024-11-06 08:03:41.206590] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:27:21.305 [2024-11-06 08:03:43.894053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.305 [2024-11-06 08:03:43.894150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:27:21.305 [2024-11-06 08:03:43.894188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2687.503 ms 00:27:21.305 [2024-11-06 08:03:43.894203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.305 [2024-11-06 08:03:43.930153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.305 [2024-11-06 08:03:43.930272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:21.305 [2024-11-06 08:03:43.930311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.507 ms 00:27:21.305 [2024-11-06 08:03:43.930327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.305 [2024-11-06 08:03:43.930530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.305 [2024-11-06 08:03:43.930556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:21.305 [2024-11-06 08:03:43.930571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:27:21.305 [2024-11-06 08:03:43.930605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.565 [2024-11-06 08:03:43.971404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.565 [2024-11-06 08:03:43.971506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:21.565 [2024-11-06 08:03:43.971527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.680 ms 00:27:21.565 [2024-11-06 08:03:43.971541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.565 [2024-11-06 08:03:43.971606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.565 [2024-11-06 08:03:43.971633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:21.565 [2024-11-06 08:03:43.971647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:21.565 [2024-11-06 08:03:43.971664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.565 [2024-11-06 08:03:43.972379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.565 [2024-11-06 08:03:43.972436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:21.565 [2024-11-06 08:03:43.972452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.630 ms 00:27:21.565 [2024-11-06 08:03:43.972466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.565 [2024-11-06 08:03:43.972612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.565 [2024-11-06 08:03:43.972633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:21.565 [2024-11-06 08:03:43.972662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:27:21.565 [2024-11-06 08:03:43.972679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.565 [2024-11-06 08:03:43.992110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.565 [2024-11-06 08:03:43.992202] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:21.565 [2024-11-06 08:03:43.992238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.400 ms 00:27:21.565 [2024-11-06 08:03:43.992273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.565 [2024-11-06 08:03:44.013968] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:21.565 [2024-11-06 08:03:44.018514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.565 [2024-11-06 08:03:44.018581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:21.565 [2024-11-06 08:03:44.018604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.089 ms 00:27:21.565 [2024-11-06 08:03:44.018633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.565 [2024-11-06 08:03:44.087553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.565 [2024-11-06 08:03:44.087654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:27:21.565 [2024-11-06 08:03:44.087725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.804 ms 00:27:21.565 [2024-11-06 08:03:44.087738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.565 [2024-11-06 08:03:44.087988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.565 [2024-11-06 08:03:44.088027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:21.565 [2024-11-06 08:03:44.088047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.177 ms 00:27:21.565 [2024-11-06 08:03:44.088062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.565 [2024-11-06 08:03:44.115486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.565 [2024-11-06 08:03:44.115546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:27:21.565 [2024-11-06 08:03:44.115584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.349 ms 00:27:21.565 [2024-11-06 08:03:44.115597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.565 [2024-11-06 08:03:44.143177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.565 [2024-11-06 08:03:44.143239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:27:21.565 [2024-11-06 08:03:44.143311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.527 ms 00:27:21.565 [2024-11-06 08:03:44.143325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.565 [2024-11-06 08:03:44.144261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.565 [2024-11-06 08:03:44.144335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:21.565 [2024-11-06 08:03:44.144371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.866 ms 00:27:21.565 [2024-11-06 08:03:44.144383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.825 [2024-11-06 08:03:44.223258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.825 [2024-11-06 08:03:44.223335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:27:21.825 [2024-11-06 08:03:44.223380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.794 ms 00:27:21.825 [2024-11-06 08:03:44.223393] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.825 [2024-11-06 08:03:44.251941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.825 [2024-11-06 08:03:44.252000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:27:21.825 [2024-11-06 08:03:44.252042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.439 ms 00:27:21.825 [2024-11-06 08:03:44.252054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.825 [2024-11-06 08:03:44.279828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.825 [2024-11-06 08:03:44.279888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:27:21.825 [2024-11-06 08:03:44.279925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.721 ms 00:27:21.825 [2024-11-06 08:03:44.279937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.825 [2024-11-06 08:03:44.307031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.825 [2024-11-06 08:03:44.307105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:21.825 [2024-11-06 08:03:44.307146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.041 ms 00:27:21.825 [2024-11-06 08:03:44.307158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.825 [2024-11-06 08:03:44.307216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.825 [2024-11-06 08:03:44.307234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:21.825 [2024-11-06 08:03:44.307404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:21.825 [2024-11-06 08:03:44.307436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.825 [2024-11-06 08:03:44.307576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.825 [2024-11-06 08:03:44.307597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:21.825 [2024-11-06 08:03:44.307614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:27:21.825 [2024-11-06 08:03:44.307629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.825 [2024-11-06 08:03:44.308942] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3123.803 ms, result 0 00:27:21.825 { 00:27:21.825 "name": "ftl0", 00:27:21.825 "uuid": "abdcc990-59fc-4691-b84d-7ee957ef350d" 00:27:21.825 } 00:27:21.825 08:03:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:27:21.825 08:03:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:27:22.084 08:03:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:27:22.084 08:03:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:27:22.084 08:03:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:27:22.343 /dev/nbd0 00:27:22.343 08:03:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:27:22.343 08:03:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:27:22.343 08:03:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # local i 00:27:22.343 08:03:44 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:22.343 08:03:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:22.343 08:03:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:27:22.343 08:03:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # break 00:27:22.343 08:03:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:22.343 08:03:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:22.343 08:03:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:27:22.343 1+0 records in 00:27:22.343 1+0 records out 00:27:22.343 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000372721 s, 11.0 MB/s 00:27:22.343 08:03:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:27:22.343 08:03:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # size=4096 00:27:22.343 08:03:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:27:22.343 08:03:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:22.343 08:03:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # return 0 00:27:22.343 08:03:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:27:22.604 [2024-11-06 08:03:45.013845] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:27:22.604 [2024-11-06 08:03:45.014049] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79154 ] 00:27:22.604 [2024-11-06 08:03:45.190965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.863 [2024-11-06 08:03:45.314714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:24.254  [2024-11-06T08:03:47.831Z] Copying: 188/1024 [MB] (188 MBps) [2024-11-06T08:03:48.768Z] Copying: 376/1024 [MB] (188 MBps) [2024-11-06T08:03:49.706Z] Copying: 564/1024 [MB] (188 MBps) [2024-11-06T08:03:50.643Z] Copying: 747/1024 [MB] (182 MBps) [2024-11-06T08:03:51.582Z] Copying: 920/1024 [MB] (173 MBps) [2024-11-06T08:03:52.521Z] Copying: 1024/1024 [MB] (average 182 MBps) 00:27:29.892 00:27:29.892 08:03:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:31.799 08:03:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:27:31.799 [2024-11-06 08:03:54.354724] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
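For reference, the ftl0 device whose startup completed above was assembled from the following RPCs, all visible in the trace (condensed sketch; the BDF addresses and sizes are this run's, and the two UUID placeholders stand in for the values printed at runtime):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0    # base NVMe -> nvme0n1
    $RPC bdev_lvol_create_lvstore nvme0n1 lvs                            # prints the lvstore UUID
    $RPC bdev_lvol_create nvme0n1p0 103424 -t -u <lvstore-uuid>          # thin-provisioned lvol, prints its bdev UUID
    $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0     # cache NVMe -> nvc0n1
    $RPC bdev_split_create nvc0n1 -s 5171 1                              # one 5171 MiB split -> nvc0n1p0
    $RPC -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> --l2p_dram_limit 10 -c nvc0n1p0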
00:27:31.799 [2024-11-06 08:03:54.354916] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79247 ] 00:27:32.058 [2024-11-06 08:03:54.539549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:32.317 [2024-11-06 08:03:54.695748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:33.749  [2024-11-06T08:03:57.316Z] Copying: 14/1024 [MB] (14 MBps) [2024-11-06T08:03:58.255Z] Copying: 29/1024 [MB] (14 MBps) [2024-11-06T08:03:59.194Z] Copying: 44/1024 [MB] (14 MBps) [2024-11-06T08:04:00.135Z] Copying: 58/1024 [MB] (14 MBps) [2024-11-06T08:04:01.107Z] Copying: 73/1024 [MB] (14 MBps) [2024-11-06T08:04:02.045Z] Copying: 88/1024 [MB] (14 MBps) [2024-11-06T08:04:03.423Z] Copying: 102/1024 [MB] (14 MBps) [2024-11-06T08:04:04.360Z] Copying: 117/1024 [MB] (14 MBps) [2024-11-06T08:04:05.311Z] Copying: 131/1024 [MB] (14 MBps) [2024-11-06T08:04:06.266Z] Copying: 146/1024 [MB] (14 MBps) [2024-11-06T08:04:07.202Z] Copying: 159/1024 [MB] (12 MBps) [2024-11-06T08:04:08.140Z] Copying: 171/1024 [MB] (12 MBps) [2024-11-06T08:04:09.077Z] Copying: 184/1024 [MB] (13 MBps) [2024-11-06T08:04:10.013Z] Copying: 199/1024 [MB] (14 MBps) [2024-11-06T08:04:11.390Z] Copying: 214/1024 [MB] (14 MBps) [2024-11-06T08:04:12.326Z] Copying: 229/1024 [MB] (14 MBps) [2024-11-06T08:04:13.260Z] Copying: 244/1024 [MB] (14 MBps) [2024-11-06T08:04:14.219Z] Copying: 258/1024 [MB] (14 MBps) [2024-11-06T08:04:15.154Z] Copying: 273/1024 [MB] (14 MBps) [2024-11-06T08:04:16.089Z] Copying: 288/1024 [MB] (14 MBps) [2024-11-06T08:04:17.023Z] Copying: 302/1024 [MB] (14 MBps) [2024-11-06T08:04:18.398Z] Copying: 317/1024 [MB] (14 MBps) [2024-11-06T08:04:19.333Z] Copying: 331/1024 [MB] (14 MBps) [2024-11-06T08:04:20.266Z] Copying: 346/1024 [MB] (14 MBps) [2024-11-06T08:04:21.201Z] Copying: 361/1024 [MB] (14 MBps) [2024-11-06T08:04:22.137Z] Copying: 376/1024 [MB] (14 MBps) [2024-11-06T08:04:23.072Z] Copying: 390/1024 [MB] (14 MBps) [2024-11-06T08:04:24.007Z] Copying: 405/1024 [MB] (14 MBps) [2024-11-06T08:04:25.383Z] Copying: 419/1024 [MB] (14 MBps) [2024-11-06T08:04:26.320Z] Copying: 434/1024 [MB] (14 MBps) [2024-11-06T08:04:27.258Z] Copying: 448/1024 [MB] (14 MBps) [2024-11-06T08:04:28.197Z] Copying: 463/1024 [MB] (14 MBps) [2024-11-06T08:04:29.136Z] Copying: 477/1024 [MB] (14 MBps) [2024-11-06T08:04:30.104Z] Copying: 492/1024 [MB] (14 MBps) [2024-11-06T08:04:31.042Z] Copying: 507/1024 [MB] (14 MBps) [2024-11-06T08:04:32.419Z] Copying: 521/1024 [MB] (14 MBps) [2024-11-06T08:04:33.355Z] Copying: 536/1024 [MB] (14 MBps) [2024-11-06T08:04:34.293Z] Copying: 551/1024 [MB] (14 MBps) [2024-11-06T08:04:35.230Z] Copying: 565/1024 [MB] (14 MBps) [2024-11-06T08:04:36.166Z] Copying: 580/1024 [MB] (14 MBps) [2024-11-06T08:04:37.101Z] Copying: 594/1024 [MB] (14 MBps) [2024-11-06T08:04:38.037Z] Copying: 610/1024 [MB] (15 MBps) [2024-11-06T08:04:39.414Z] Copying: 625/1024 [MB] (15 MBps) [2024-11-06T08:04:40.349Z] Copying: 641/1024 [MB] (16 MBps) [2024-11-06T08:04:41.287Z] Copying: 657/1024 [MB] (16 MBps) [2024-11-06T08:04:42.224Z] Copying: 674/1024 [MB] (17 MBps) [2024-11-06T08:04:43.160Z] Copying: 692/1024 [MB] (17 MBps) [2024-11-06T08:04:44.096Z] Copying: 709/1024 [MB] (17 MBps) [2024-11-06T08:04:45.033Z] Copying: 727/1024 [MB] (17 MBps) [2024-11-06T08:04:46.410Z] Copying: 744/1024 [MB] (17 MBps) 
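A rough sanity check on the copy in flight here: the progress records land about a second apart at 14-17 MBps each, so the full transfer should take on the order of

    1024 MiB / ~15 MiB/s ≈ 68 s

which matches the span of the surrounding timestamps (≈08:03:55 to ≈08:05:02) and is roughly twelve times slower than the 182 MBps file-staging pass above -- plausible, since every 4 KiB block now goes through the FTL write path (L2P update plus NV-cache writeback) rather than a plain file copy.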
[2024-11-06T08:04:47.346Z] Copying: 762/1024 [MB] (17 MBps) [2024-11-06T08:04:48.284Z] Copying: 779/1024 [MB] (17 MBps) [2024-11-06T08:04:49.220Z] Copying: 796/1024 [MB] (17 MBps) [2024-11-06T08:04:50.155Z] Copying: 814/1024 [MB] (17 MBps) [2024-11-06T08:04:51.091Z] Copying: 831/1024 [MB] (17 MBps) [2024-11-06T08:04:52.027Z] Copying: 848/1024 [MB] (17 MBps) [2024-11-06T08:04:53.402Z] Copying: 866/1024 [MB] (17 MBps) [2024-11-06T08:04:54.339Z] Copying: 884/1024 [MB] (17 MBps) [2024-11-06T08:04:55.276Z] Copying: 901/1024 [MB] (17 MBps) [2024-11-06T08:04:56.213Z] Copying: 919/1024 [MB] (17 MBps) [2024-11-06T08:04:57.149Z] Copying: 936/1024 [MB] (17 MBps) [2024-11-06T08:04:58.094Z] Copying: 954/1024 [MB] (17 MBps) [2024-11-06T08:04:59.030Z] Copying: 972/1024 [MB] (17 MBps) [2024-11-06T08:05:00.408Z] Copying: 989/1024 [MB] (17 MBps) [2024-11-06T08:05:00.976Z] Copying: 1006/1024 [MB] (17 MBps) [2024-11-06T08:05:02.352Z] Copying: 1024/1024 [MB] (average 15 MBps) 00:28:39.723 00:28:39.723 08:05:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:28:39.723 08:05:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:28:39.723 08:05:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:28:39.983 [2024-11-06 08:05:02.427891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.983 [2024-11-06 08:05:02.427947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:39.983 [2024-11-06 08:05:02.427968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:39.983 [2024-11-06 08:05:02.427982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.983 [2024-11-06 08:05:02.428014] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:39.983 [2024-11-06 08:05:02.431421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.983 [2024-11-06 08:05:02.431452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:39.983 [2024-11-06 08:05:02.431468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.382 ms 00:28:39.983 [2024-11-06 08:05:02.431479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.983 [2024-11-06 08:05:02.433529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.983 [2024-11-06 08:05:02.433565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:39.983 [2024-11-06 08:05:02.433582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.011 ms 00:28:39.983 [2024-11-06 08:05:02.433593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.983 [2024-11-06 08:05:02.448944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.983 [2024-11-06 08:05:02.448980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:39.983 [2024-11-06 08:05:02.449003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.324 ms 00:28:39.983 [2024-11-06 08:05:02.449017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.983 [2024-11-06 08:05:02.454094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.983 [2024-11-06 08:05:02.454123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:39.983 
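Stripped of the shell plumbing, the data phase and the clean teardown now being logged reduce to the sequence below (sketch assembled from the trace above; testfile abbreviates the test/ftl/testfile path, and -- given the test name -- a later stage presumably repeats the writes without this graceful unload to exercise dirty-shutdown recovery):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    $RPC nbd_start_disk ftl0 /dev/nbd0                                    # expose ftl0 as /dev/nbd0
    $DD -m 0x2 --if=/dev/urandom --of=testfile --bs=4096 --count=262144   # stage 1 GiB of random data
    md5sum testfile                                                       # reference checksum
    $DD -m 0x2 --if=testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct
    sync /dev/nbd0                                                        # flush buffered writes
    $RPC nbd_stop_disk /dev/nbd0
    $RPC bdev_ftl_unload -b ftl0                                          # persists L2P/NV-cache/band metadata, sets clean state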
[2024-11-06 08:05:02.454139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.035 ms 00:28:39.983 [2024-11-06 08:05:02.454149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.983 [2024-11-06 08:05:02.479611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.983 [2024-11-06 08:05:02.479646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:39.983 [2024-11-06 08:05:02.479663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.377 ms 00:28:39.983 [2024-11-06 08:05:02.479674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.983 [2024-11-06 08:05:02.496225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.983 [2024-11-06 08:05:02.496274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:39.983 [2024-11-06 08:05:02.496294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.503 ms 00:28:39.983 [2024-11-06 08:05:02.496305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.983 [2024-11-06 08:05:02.496461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.983 [2024-11-06 08:05:02.496480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:39.983 [2024-11-06 08:05:02.496496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:28:39.983 [2024-11-06 08:05:02.496507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.983 [2024-11-06 08:05:02.521086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.983 [2024-11-06 08:05:02.521119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:39.983 [2024-11-06 08:05:02.521137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.553 ms 00:28:39.984 [2024-11-06 08:05:02.521152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.984 [2024-11-06 08:05:02.545113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.984 [2024-11-06 08:05:02.545148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:39.984 [2024-11-06 08:05:02.545165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.916 ms 00:28:39.984 [2024-11-06 08:05:02.545175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.984 [2024-11-06 08:05:02.568730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.984 [2024-11-06 08:05:02.568765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:39.984 [2024-11-06 08:05:02.568782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.507 ms 00:28:39.984 [2024-11-06 08:05:02.568792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.984 [2024-11-06 08:05:02.592403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.984 [2024-11-06 08:05:02.592436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:39.984 [2024-11-06 08:05:02.592453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.524 ms 00:28:39.984 [2024-11-06 08:05:02.592463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.984 [2024-11-06 08:05:02.592507] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:39.984 [2024-11-06 08:05:02.592528] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.592544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.592555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.592568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.592579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.592591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.592601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.592622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.592633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.592645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.592656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.592669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.592679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.592692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.592702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.592714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.592724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.592737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.592747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.592762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.592773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.592785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.592795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.592810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.592820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 
08:05:02.592833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.592844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.592857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.592867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.592879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.592889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.592902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.592914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.592927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.592940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.592952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.592964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.592976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.592986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.593001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.593012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.593024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.593034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.593046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.593056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.593078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.593090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.593103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.593133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.593146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 
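The validity dump continues through every band on the device with the same values; if a dump like this is captured to a file, a one-liner along these lines is a quick way to confirm no band has been written yet (illustrative; unload.log is a hypothetical capture of this output):

    grep -c 'wr_cnt: 0 state: free' unload.log    # one line per untouched band; every band shown in this dump matches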
00:28:39.984 [2024-11-06 08:05:02.593157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.593169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.593180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.593192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.593202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.593218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.593230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.593242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.593264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.593279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.593289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.593302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.593312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.593325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.593335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.593349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.593359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.593371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.593381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.593394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.593404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.593421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.593432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.593444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.593455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 
wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.593467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.593477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:39.984 [2024-11-06 08:05:02.593489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:39.985 [2024-11-06 08:05:02.593500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:39.985 [2024-11-06 08:05:02.593512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:39.985 [2024-11-06 08:05:02.593523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:39.985 [2024-11-06 08:05:02.593535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:39.985 [2024-11-06 08:05:02.593545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:39.985 [2024-11-06 08:05:02.593557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:39.985 [2024-11-06 08:05:02.593568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:39.985 [2024-11-06 08:05:02.593581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:39.985 [2024-11-06 08:05:02.593591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:39.985 [2024-11-06 08:05:02.593607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:39.985 [2024-11-06 08:05:02.593618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:39.985 [2024-11-06 08:05:02.593631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:39.985 [2024-11-06 08:05:02.593641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:39.985 [2024-11-06 08:05:02.593655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:39.985 [2024-11-06 08:05:02.593665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:39.985 [2024-11-06 08:05:02.593677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:39.985 [2024-11-06 08:05:02.593688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:39.985 [2024-11-06 08:05:02.593700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:39.985 [2024-11-06 08:05:02.593711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:39.985 [2024-11-06 08:05:02.593725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:39.985 [2024-11-06 08:05:02.593736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:39.985 [2024-11-06 08:05:02.593748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:39.985 [2024-11-06 08:05:02.593766] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:39.985 [2024-11-06 08:05:02.593778] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: abdcc990-59fc-4691-b84d-7ee957ef350d 00:28:39.985 [2024-11-06 08:05:02.593789] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:39.985 [2024-11-06 08:05:02.593803] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:39.985 [2024-11-06 08:05:02.593813] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:39.985 [2024-11-06 08:05:02.593826] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:39.985 [2024-11-06 08:05:02.593836] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:39.985 [2024-11-06 08:05:02.593852] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:39.985 [2024-11-06 08:05:02.593861] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:39.985 [2024-11-06 08:05:02.593872] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:39.985 [2024-11-06 08:05:02.593881] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:39.985 [2024-11-06 08:05:02.593894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.985 [2024-11-06 08:05:02.593903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:39.985 [2024-11-06 08:05:02.593917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.390 ms 00:28:39.985 [2024-11-06 08:05:02.593927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.985 [2024-11-06 08:05:02.608183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.985 [2024-11-06 08:05:02.608215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:39.985 [2024-11-06 08:05:02.608233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.211 ms 00:28:39.985 [2024-11-06 08:05:02.608270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.985 [2024-11-06 08:05:02.608744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.985 [2024-11-06 08:05:02.608772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:39.985 [2024-11-06 08:05:02.608787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.414 ms 00:28:39.985 [2024-11-06 08:05:02.608798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.244 [2024-11-06 08:05:02.656579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:40.244 [2024-11-06 08:05:02.656618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:40.244 [2024-11-06 08:05:02.656638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:40.244 [2024-11-06 08:05:02.656652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.244 [2024-11-06 08:05:02.656723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:40.244 [2024-11-06 08:05:02.656738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:40.244 [2024-11-06 08:05:02.656751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:40.244 [2024-11-06 08:05:02.656762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
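The "WAF: inf" line above follows directly from the two counters next to it: write amplification is total media writes divided by user writes, and this shutdown happened before any user write landed (total writes: 960, user writes: 0). A minimal sketch of that ratio, with the values hard-coded from the dump above (not SPDK's actual implementation):

    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Write amplification factor: media writes / user writes.
     * With zero user writes the ratio is reported as infinity,
     * matching the "WAF: inf" line in the stats dump. */
    static double waf(uint64_t total_writes, uint64_t user_writes)
    {
        return user_writes ? (double)total_writes / (double)user_writes
                           : INFINITY;
    }

    int main(void)
    {
        printf("WAF: %g\n", waf(960, 0)); /* prints: WAF: inf */
        return 0;
    }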
00:28:40.244 mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Rollback steps, each with duration: 0.000 ms, status: 0: Initialize reloc, Initialize bands metadata, Initialize trim map, Initialize valid map, Initialize NV cache, Initialize metadata, Initialize core IO channel, Initialize bands, Initialize memory pools, Initialize superblock, Open cache bdev, Open base bdev
00:28:40.244 [2024-11-06 08:05:02.818589] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 390.652 ms, result 0
00:28:40.244 true
00:28:40.244 08:05:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 79006
00:28:40.244 08:05:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid79006
00:28:40.244 08:05:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144
00:28:40.503 [2024-11-06 08:05:02.929915] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization...
00:28:40.503 [2024-11-06 08:05:02.930052] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79928 ]
00:28:40.762 [2024-11-06 08:05:03.093487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:42.138 [2024-11-06 08:05:03.205035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:46.817 Copying: 211/1024 [MB] (211 MBps), 417/1024 (206 MBps), 627/1024 (209 MBps), 837/1024 (210 MBps), 1024/1024 [MB] (average 209 MBps)
00:28:46.817 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 79006 Killed                  "$SPDK_BIN_DIR/spdk_tgt" -m 0x1
00:28:47.080 08:05:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
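Both spdk_dd passes move the same amount of data: 262144 blocks of 4096 bytes is exactly 1 GiB, and --seek=262144 places the second pass 1 GiB into the ftl0 target. A quick standalone check of that arithmetic (not part of the test itself):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        const uint64_t bs = 4096, count = 262144, seek = 262144;

        /* 262144 blocks * 4096 B = 1073741824 B = 1024 MiB per pass. */
        printf("bytes per pass: %llu (= %llu MiB)\n",
               (unsigned long long)(bs * count),
               (unsigned long long)((bs * count) >> 20));

        /* --seek counts blocks, so the second pass starts 1 GiB in. */
        printf("seek offset:    %llu MiB\n",
               (unsigned long long)((bs * seek) >> 20));
        return 0;
    }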
00:28:47.080 [2024-11-06 08:05:09.475245] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization...
00:28:47.080 [2024-11-06 08:05:09.475443] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79993 ]
00:28:47.344 [2024-11-06 08:05:09.649724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:47.344 [2024-11-06 08:05:09.760439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:47.606 [2024-11-06 08:05:10.101388] bdev.c:8607:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 (reported twice)
00:28:47.607 [2024-11-06 08:05:10.167223] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore
00:28:47.607 blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0, blob 0x1
00:28:47.867 mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Check configuration'        duration: 0.008 ms    status: 0
00:28:47.867 mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Open base bdev'             duration: 0.049 ms    status: 0
00:28:47.867 mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:28:47.867 mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:28:47.867 mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Open cache bdev'            duration: 0.767 ms    status: 0
00:28:47.867 mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:28:47.867 mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Load super block'           duration: 14.076 ms   status: 0
00:28:47.867 mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Validate super block'       duration: 0.041 ms    status: 0
00:28:47.867 mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize memory pools'    duration: 11.341 ms   status: 0
00:28:47.867 mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize bands'           duration: 0.064 ms    status: 0
00:28:47.867 mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Register IO device'         duration: 0.011 ms    status: 0
00:28:47.867 mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:28:47.867 mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize core IO channel' duration: 4.701 ms    status: 0
00:28:47.867 mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Decorate bands'             duration: 0.011 ms    status: 0
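Every management step above and below is emitted by mngt/ftl_mngt.c as the same four-field record: an Action (or Rollback) marker, a step name, a duration in milliseconds, and a status. If you ever need to pull step timings out of a console log like this one, a small sketch is enough; the format string here is inferred from the lines in this log, not taken from SPDK:

    #include <stdio.h>
    #include <string.h>

    /* Extract the "duration: <ms> ms" field from a trace_step line
     * as it appears in this log. Returns 1 on a successful match. */
    static int parse_duration(const char *line, double *ms)
    {
        const char *p = strstr(line, "duration: ");
        return p && sscanf(p, "duration: %lf ms", ms) == 1;
    }

    int main(void)
    {
        double ms;
        const char *line = "[FTL][ftl0] duration: 14.076 ms";
        if (parse_duration(line, &ms))
            printf("step took %.3f ms\n", ms); /* 14.076 */
        return 0;
    }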
00:28:47.867 [2024-11-06 08:05:10.463736] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:28:47.867 upgrade/ftl_sb_v5.c: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes, base layout blob load 0x48 bytes, layout blob load 0x190 bytes; nvc layout blob store 0x150 bytes, base layout blob store 0x48 bytes, layout blob store 0x190 bytes
00:28:47.867 ftl_layout.c: 685-692:ftl_layout_setup: *NOTICE*: [FTL][ftl0]
    Base device capacity:     103424.00 MiB
    NV cache device capacity: 5171.00 MiB
    L2P entries:              20971520
    L2P address size:         4
    P2L checkpoint pages:     2048
    NV cache chunk count:     5
00:28:47.867 mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize layout' duration: 0.319 ms  status: 0
00:28:47.867 mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Verify layout'     duration: 0.054 ms  status: 0
00:28:47.867 ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
    Region           offset (MiB)  blocks (MiB)
    sb                     0.00          0.12
    l2p                    0.12         80.00
    band_md               80.12          0.50
    band_md_mirror        80.62          0.50
    nvc_md               113.88          0.12
    nvc_md_mirror        114.00          0.12
    p2l0                  81.12          8.00
    p2l1                  89.12          8.00
    p2l2                  97.12          8.00
    p2l3                 105.12          8.00
    trim_md              113.12          0.25
    trim_md_mirror       113.38          0.25
    trim_log             113.62          0.12
    trim_log_mirror      113.75          0.12
00:28:47.868 ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
    sb_mirror              0.00          0.12
    vmap              102400.25          3.38
    data_btm               0.25     102400.00
00:28:47.868 upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
    Region type:0x0  ver:5 blk_offs:0x0    blk_sz:0x20
    Region type:0x2  ver:0 blk_offs:0x20   blk_sz:0x5000
    Region type:0x3  ver:2 blk_offs:0x5020 blk_sz:0x80
    Region type:0x4  ver:2 blk_offs:0x50a0 blk_sz:0x80
    Region type:0xa  ver:2 blk_offs:0x5120 blk_sz:0x800
    Region type:0xb  ver:2 blk_offs:0x5920 blk_sz:0x800
    Region type:0xc  ver:2 blk_offs:0x6120 blk_sz:0x800
    Region type:0xd  ver:2 blk_offs:0x6920 blk_sz:0x800
    Region type:0xe  ver:0 blk_offs:0x7120 blk_sz:0x40
    Region type:0xf  ver:0 blk_offs:0x7160 blk_sz:0x40
    Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
    Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
    Region type:0x6  ver:2 blk_offs:0x71e0 blk_sz:0x20
    Region type:0x7  ver:2 blk_offs:0x7200 blk_sz:0x20
    Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:28:47.868 upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
    Region type:0x1  ver:5 blk_offs:0x0  blk_sz:0x20
    Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
    Region type:0x9  ver:0 blk_offs:0x40 blk_sz:0x1900000
    Region type:0x5  ver:0 blk_offs:0x1900040 blk_sz:0x360
    Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:28:47.868 mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Layout upgrade' duration: 0.810 ms  status: 0
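The hex superblock table and the MiB layout dump describe the same regions: blk_sz is a count of FTL blocks, so assuming the 4 KiB FTL block size (an assumption, but one consistent with every row above), region type 0x2 at blk_sz:0x5000 is exactly the 80.00 MiB l2p region, and 20971520 L2P entries at 4 bytes each also land on 80 MiB. A sketch of both conversions:

    #include <stdint.h>
    #include <stdio.h>

    #define FTL_BLOCK_SIZE 4096u /* assumed 4 KiB FTL block, matching the dump */

    int main(void)
    {
        /* Region type 0x2 (l2p): 0x5000 blocks -> MiB. */
        uint64_t l2p_blocks = 0x5000;
        printf("l2p region: %.2f MiB\n",
               (double)(l2p_blocks * FTL_BLOCK_SIZE) / (1024.0 * 1024.0));

        /* The L2P table itself: 20971520 entries * 4 B addresses. */
        uint64_t entries = 20971520, addr_size = 4;
        printf("L2P table:  %.2f MiB\n",
               (double)(entries * addr_size) / (1024.0 * 1024.0));
        return 0; /* both print 80.00 MiB, matching the layout dump */
    }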
00:28:48.128 mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize metadata'        duration: 41.113 ms  status: 0
00:28:48.128 mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize band addresses'  duration: 0.065 ms   status: 0
00:28:48.128 mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize NV cache'        duration: 56.729 ms  status: 0
00:28:48.128 mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize valid map'       duration: 0.004 ms   status: 0
00:28:48.128 mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize trim map'        duration: 0.763 ms   status: 0
00:28:48.128 mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize bands metadata'  duration: 0.134 ms   status: 0
00:28:48.129 mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize reloc'           duration: 18.405 ms  status: 0
00:28:48.129 [2024-11-06 08:05:10.597729] ftl_nv_cache.c:1772/1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2; state loaded successfully
00:28:48.129 mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore NV cache metadata'  duration: 14.445 ms  status: 0
00:28:48.129 mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore valid map metadata' duration: 23.562 ms  status: 0
00:28:48.129 mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore band info metadata' duration: 12.183 ms  status: 0
00:28:48.129 mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore trim metadata'      duration: 11.865 ms  status: 0
00:28:48.129 mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize P2L checkpointing' duration: 0.562 ms status: 0
00:28:48.129 mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore P2L checkpoints'    duration: 74.084 ms  status: 0
00:28:48.129 [2024-11-06 08:05:10.730664] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:28:48.129 mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize L2P'             duration: 12.623 ms  status: 0
00:28:48.129 mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore L2P'                duration: 0.007 ms   status: 0
00:28:48.129 mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Finalize band initialization' duration: 0.085 ms status: 0
00:28:48.129 mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Start core poller'          duration: 0.007 ms   status: 0
00:28:48.129 mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:28:48.129 mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Self test on startup'       duration: 0.029 ms   status: 0
00:28:48.388 mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Set FTL dirty state'        duration: 25.994 ms  status: 0
00:28:48.388 mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Finalize initialization'    duration: 0.042 ms   status: 0
00:28:48.388 [2024-11-06 08:05:10.761877] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 331.839 ms, result 0
00:28:49.323 Copying: 26/1024 [MB] (26 MBps), 53 (26 MBps), 79 (25 MBps), 106 (27 MBps) ... [per-step rate holds between 22 and 27 MBps for the whole pass] ... 994 (23 MBps), 1018 (24 MBps), 1048564/1048576 [kB] (5156 kBps), 1024/1024 [MB] (average 24 MBps)
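The closing "average 24 MBps" is just total bytes over wall-clock time: the copy timestamps above run from roughly 08:05:11 to 08:05:51, about 42 seconds for 1024 MiB. A sketch of that computation (the elapsed time is approximated from the log, not measured):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* 1024 MiB copied; timestamps above span roughly 42 seconds. */
        uint64_t bytes = 1024ull << 20;
        double elapsed_s = 42.0; /* approximated from the log timestamps */

        printf("average: %.0f MBps\n",
               (double)bytes / (1024.0 * 1024.0) / elapsed_s); /* ~24 MBps */
        return 0;
    }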
00:29:29.177 [2024-11-06 08:05:51.798215] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinit core IO channel' duration: 0.005 ms  status: 0
00:29:29.435 [2024-11-06 08:05:51.802265] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:29:29.435 mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Unregister IO device'       duration: 4.461 ms    status: 0
00:29:29.435 mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Stop core poller'           duration: 8.927 ms    status: 0
00:29:29.435 mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist L2P'                duration: 21.700 ms   status: 0
00:29:29.435 mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Finish L2P trims'           duration: 5.063 ms    status: 0
00:29:29.435 mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist NV cache metadata'  duration: 25.576 ms   status: 0
00:29:29.436 mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist valid map metadata' duration: 15.492 ms   status: 0
00:29:29.436 mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist P2L metadata'       duration: 100.111 ms  status: 0
00:29:29.436 mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist band info metadata' duration: 24.469 ms   status: 0
00:29:29.436 mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist trim metadata'      duration: 24.039 ms   status: 0
00:29:29.436 mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist superblock'         duration: 23.734 ms   status: 0
00:29:29.696 mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] Action 'Set FTL clean state'        duration: 23.580 ms   status: 0
00:29:29.696 [2024-11-06 08:05:52.081517] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:29:29.696 [2024-11-06 08:05:52.081546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 113152 / 261120 wr_cnt: 1 state: open
00:29:29.696 [Bands 2 through 88 are identical: 0 / 261120 wr_cnt: 0 state: free; the excerpt ends mid-dump at Band 88]
0 / 261120 wr_cnt: 0 state: free 00:29:29.697 [2024-11-06 08:05:52.082480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:29.697 [2024-11-06 08:05:52.082491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:29.697 [2024-11-06 08:05:52.082504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:29.697 [2024-11-06 08:05:52.082516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:29.697 [2024-11-06 08:05:52.082526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:29.697 [2024-11-06 08:05:52.082536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:29.697 [2024-11-06 08:05:52.082546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:29.697 [2024-11-06 08:05:52.082556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:29.697 [2024-11-06 08:05:52.082567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:29.697 [2024-11-06 08:05:52.082577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:29.697 [2024-11-06 08:05:52.082587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:29.697 [2024-11-06 08:05:52.082597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:29.697 [2024-11-06 08:05:52.082614] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:29.697 [2024-11-06 08:05:52.082623] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: abdcc990-59fc-4691-b84d-7ee957ef350d 00:29:29.697 [2024-11-06 08:05:52.082634] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 113152 00:29:29.697 [2024-11-06 08:05:52.082649] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 114112 00:29:29.697 [2024-11-06 08:05:52.082670] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 113152 00:29:29.697 [2024-11-06 08:05:52.082685] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0085 00:29:29.697 [2024-11-06 08:05:52.082695] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:29.697 [2024-11-06 08:05:52.082705] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:29.697 [2024-11-06 08:05:52.082716] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:29.697 [2024-11-06 08:05:52.082724] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:29.697 [2024-11-06 08:05:52.082732] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:29.697 [2024-11-06 08:05:52.082742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.697 [2024-11-06 08:05:52.082752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:29.697 [2024-11-06 08:05:52.082763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.226 ms 00:29:29.697 [2024-11-06 08:05:52.082772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.697 [2024-11-06 08:05:52.097122] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:29:29.697 [2024-11-06 08:05:52.097153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:29.697 [2024-11-06 08:05:52.097166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.329 ms 00:29:29.697 [2024-11-06 08:05:52.097176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.697 [2024-11-06 08:05:52.097652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.697 [2024-11-06 08:05:52.097675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:29.697 [2024-11-06 08:05:52.097687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.440 ms 00:29:29.697 [2024-11-06 08:05:52.097697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.697 [2024-11-06 08:05:52.136135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:29.697 [2024-11-06 08:05:52.136186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:29.697 [2024-11-06 08:05:52.136200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:29.697 [2024-11-06 08:05:52.136210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.697 [2024-11-06 08:05:52.136272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:29.697 [2024-11-06 08:05:52.136288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:29.697 [2024-11-06 08:05:52.136300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:29.697 [2024-11-06 08:05:52.136317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.697 [2024-11-06 08:05:52.136396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:29.697 [2024-11-06 08:05:52.136413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:29.697 [2024-11-06 08:05:52.136423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:29.697 [2024-11-06 08:05:52.136438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.697 [2024-11-06 08:05:52.136457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:29.697 [2024-11-06 08:05:52.136472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:29.697 [2024-11-06 08:05:52.136482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:29.697 [2024-11-06 08:05:52.136492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.697 [2024-11-06 08:05:52.225752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:29.697 [2024-11-06 08:05:52.225818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:29.697 [2024-11-06 08:05:52.225837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:29.697 [2024-11-06 08:05:52.225848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.697 [2024-11-06 08:05:52.298262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:29.697 [2024-11-06 08:05:52.298328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:29.697 [2024-11-06 08:05:52.298347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:29.697 [2024-11-06 08:05:52.298358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:29:29.697 [2024-11-06 08:05:52.298479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:29.697 [2024-11-06 08:05:52.298496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:29.697 [2024-11-06 08:05:52.298507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:29.697 [2024-11-06 08:05:52.298517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.697 [2024-11-06 08:05:52.298560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:29.697 [2024-11-06 08:05:52.298580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:29.697 [2024-11-06 08:05:52.298591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:29.697 [2024-11-06 08:05:52.298601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.697 [2024-11-06 08:05:52.298727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:29.697 [2024-11-06 08:05:52.298749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:29.697 [2024-11-06 08:05:52.298761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:29.697 [2024-11-06 08:05:52.298771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.697 [2024-11-06 08:05:52.298817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:29.697 [2024-11-06 08:05:52.298833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:29.697 [2024-11-06 08:05:52.298844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:29.697 [2024-11-06 08:05:52.298855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.697 [2024-11-06 08:05:52.298906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:29.697 [2024-11-06 08:05:52.298927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:29.697 [2024-11-06 08:05:52.298938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:29.697 [2024-11-06 08:05:52.298948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.697 [2024-11-06 08:05:52.299017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:29.697 [2024-11-06 08:05:52.299034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:29.697 [2024-11-06 08:05:52.299045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:29.697 [2024-11-06 08:05:52.299055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.697 [2024-11-06 08:05:52.299219] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 502.294 ms, result 0 00:29:31.145 00:29:31.145 00:29:31.145 08:05:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:29:33.051 08:05:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:33.051 [2024-11-06 08:05:55.550706] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
00:29:33.051 [2024-11-06 08:05:55.550898] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80471 ] 00:29:33.312 [2024-11-06 08:05:55.726159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.312 [2024-11-06 08:05:55.859888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:33.882 [2024-11-06 08:05:56.230570] bdev.c:8607:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:33.882 [2024-11-06 08:05:56.230672] bdev.c:8607:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:33.882 [2024-11-06 08:05:56.393605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.882 [2024-11-06 08:05:56.393699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:33.882 [2024-11-06 08:05:56.393725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:29:33.882 [2024-11-06 08:05:56.393737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.882 [2024-11-06 08:05:56.393804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.882 [2024-11-06 08:05:56.393824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:33.882 [2024-11-06 08:05:56.393842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:29:33.882 [2024-11-06 08:05:56.393853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.882 [2024-11-06 08:05:56.393882] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:33.882 [2024-11-06 08:05:56.394607] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:33.882 [2024-11-06 08:05:56.394640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.882 [2024-11-06 08:05:56.394653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:33.882 [2024-11-06 08:05:56.394665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.765 ms 00:29:33.882 [2024-11-06 08:05:56.394676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.882 [2024-11-06 08:05:56.397181] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:33.882 [2024-11-06 08:05:56.411645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.882 [2024-11-06 08:05:56.411684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:33.882 [2024-11-06 08:05:56.411700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.466 ms 00:29:33.882 [2024-11-06 08:05:56.411712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.882 [2024-11-06 08:05:56.411783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.882 [2024-11-06 08:05:56.411806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:33.882 [2024-11-06 08:05:56.411819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:29:33.882 [2024-11-06 08:05:56.411830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.882 [2024-11-06 08:05:56.423495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:33.882 [2024-11-06 08:05:56.423543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:33.882 [2024-11-06 08:05:56.423559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.590 ms 00:29:33.882 [2024-11-06 08:05:56.423571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.882 [2024-11-06 08:05:56.423685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.882 [2024-11-06 08:05:56.423704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:33.882 [2024-11-06 08:05:56.423717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:29:33.882 [2024-11-06 08:05:56.423728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.882 [2024-11-06 08:05:56.423820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.882 [2024-11-06 08:05:56.423839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:33.882 [2024-11-06 08:05:56.423852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:29:33.882 [2024-11-06 08:05:56.423864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.882 [2024-11-06 08:05:56.423900] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:33.882 [2024-11-06 08:05:56.428806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.882 [2024-11-06 08:05:56.428838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:33.882 [2024-11-06 08:05:56.428853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.918 ms 00:29:33.882 [2024-11-06 08:05:56.428869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.882 [2024-11-06 08:05:56.428905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.882 [2024-11-06 08:05:56.428921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:33.882 [2024-11-06 08:05:56.428934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:29:33.882 [2024-11-06 08:05:56.428945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.883 [2024-11-06 08:05:56.428990] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:33.883 [2024-11-06 08:05:56.429025] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:33.883 [2024-11-06 08:05:56.429066] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:33.883 [2024-11-06 08:05:56.429115] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:33.883 [2024-11-06 08:05:56.429222] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:33.883 [2024-11-06 08:05:56.429239] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:33.883 [2024-11-06 08:05:56.429254] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:33.883 [2024-11-06 08:05:56.429284] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:33.883 [2024-11-06 08:05:56.429300] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:33.883 [2024-11-06 08:05:56.429313] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:33.883 [2024-11-06 08:05:56.429325] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:33.883 [2024-11-06 08:05:56.429336] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:33.883 [2024-11-06 08:05:56.429347] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:33.883 [2024-11-06 08:05:56.429366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.883 [2024-11-06 08:05:56.429379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:33.883 [2024-11-06 08:05:56.429391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.379 ms 00:29:33.883 [2024-11-06 08:05:56.429419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.883 [2024-11-06 08:05:56.429511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.883 [2024-11-06 08:05:56.429528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:33.883 [2024-11-06 08:05:56.429540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:29:33.883 [2024-11-06 08:05:56.429551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.883 [2024-11-06 08:05:56.429658] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:33.883 [2024-11-06 08:05:56.429690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:33.883 [2024-11-06 08:05:56.429703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:33.883 [2024-11-06 08:05:56.429715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:33.883 [2024-11-06 08:05:56.429726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:33.883 [2024-11-06 08:05:56.429735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:33.883 [2024-11-06 08:05:56.429746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:33.883 [2024-11-06 08:05:56.429757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:33.883 [2024-11-06 08:05:56.429767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:33.883 [2024-11-06 08:05:56.429777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:33.883 [2024-11-06 08:05:56.429786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:33.883 [2024-11-06 08:05:56.429794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:33.883 [2024-11-06 08:05:56.429803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:33.883 [2024-11-06 08:05:56.429814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:33.883 [2024-11-06 08:05:56.429824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:33.883 [2024-11-06 08:05:56.429847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:33.883 [2024-11-06 08:05:56.429857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:33.883 [2024-11-06 08:05:56.429867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:33.883 [2024-11-06 08:05:56.429877] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:33.883 [2024-11-06 08:05:56.429887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:33.883 [2024-11-06 08:05:56.429896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:33.883 [2024-11-06 08:05:56.429906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:33.883 [2024-11-06 08:05:56.429916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:33.883 [2024-11-06 08:05:56.429926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:33.883 [2024-11-06 08:05:56.429935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:33.883 [2024-11-06 08:05:56.429945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:33.883 [2024-11-06 08:05:56.429954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:33.883 [2024-11-06 08:05:56.429963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:33.883 [2024-11-06 08:05:56.429973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:33.883 [2024-11-06 08:05:56.429983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:33.883 [2024-11-06 08:05:56.429993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:33.883 [2024-11-06 08:05:56.430002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:33.883 [2024-11-06 08:05:56.430012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:33.883 [2024-11-06 08:05:56.430021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:33.883 [2024-11-06 08:05:56.430031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:33.883 [2024-11-06 08:05:56.430041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:33.883 [2024-11-06 08:05:56.430050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:33.883 [2024-11-06 08:05:56.430061] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:33.883 [2024-11-06 08:05:56.430071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:33.883 [2024-11-06 08:05:56.430081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:33.883 [2024-11-06 08:05:56.430090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:33.883 [2024-11-06 08:05:56.430100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:33.883 [2024-11-06 08:05:56.430108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:33.883 [2024-11-06 08:05:56.430117] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:33.883 [2024-11-06 08:05:56.430128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:33.883 [2024-11-06 08:05:56.430138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:33.883 [2024-11-06 08:05:56.430149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:33.883 [2024-11-06 08:05:56.430161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:33.883 [2024-11-06 08:05:56.430170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:33.883 [2024-11-06 08:05:56.430180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:33.883 
[2024-11-06 08:05:56.430190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:33.883 [2024-11-06 08:05:56.430199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:33.883 [2024-11-06 08:05:56.430208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:33.883 [2024-11-06 08:05:56.430220] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:33.883 [2024-11-06 08:05:56.430233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:33.883 [2024-11-06 08:05:56.430245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:33.883 [2024-11-06 08:05:56.430292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:33.883 [2024-11-06 08:05:56.430304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:33.883 [2024-11-06 08:05:56.430315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:33.883 [2024-11-06 08:05:56.430326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:33.883 [2024-11-06 08:05:56.430336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:33.883 [2024-11-06 08:05:56.430347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:33.884 [2024-11-06 08:05:56.430358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:33.884 [2024-11-06 08:05:56.430368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:33.884 [2024-11-06 08:05:56.430379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:33.884 [2024-11-06 08:05:56.430392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:33.884 [2024-11-06 08:05:56.430403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:33.884 [2024-11-06 08:05:56.430414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:33.884 [2024-11-06 08:05:56.430425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:33.884 [2024-11-06 08:05:56.430436] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:33.884 [2024-11-06 08:05:56.430451] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:33.884 [2024-11-06 08:05:56.430469] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:29:33.884 [2024-11-06 08:05:56.430481] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:33.884 [2024-11-06 08:05:56.430493] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:33.884 [2024-11-06 08:05:56.430504] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:33.884 [2024-11-06 08:05:56.430516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.884 [2024-11-06 08:05:56.430528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:33.884 [2024-11-06 08:05:56.430540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.912 ms 00:29:33.884 [2024-11-06 08:05:56.430551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.884 [2024-11-06 08:05:56.471604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.884 [2024-11-06 08:05:56.471695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:33.884 [2024-11-06 08:05:56.471715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.986 ms 00:29:33.884 [2024-11-06 08:05:56.471728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.884 [2024-11-06 08:05:56.471856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.884 [2024-11-06 08:05:56.471880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:33.884 [2024-11-06 08:05:56.471894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:29:33.884 [2024-11-06 08:05:56.471905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.144 [2024-11-06 08:05:56.524001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.144 [2024-11-06 08:05:56.524077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:34.144 [2024-11-06 08:05:56.524097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.976 ms 00:29:34.144 [2024-11-06 08:05:56.524109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.144 [2024-11-06 08:05:56.524199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.144 [2024-11-06 08:05:56.524216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:34.144 [2024-11-06 08:05:56.524230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:34.144 [2024-11-06 08:05:56.524273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.144 [2024-11-06 08:05:56.525248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.144 [2024-11-06 08:05:56.525292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:34.144 [2024-11-06 08:05:56.525313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.826 ms 00:29:34.144 [2024-11-06 08:05:56.525325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.144 [2024-11-06 08:05:56.525520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.144 [2024-11-06 08:05:56.525545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:34.144 [2024-11-06 08:05:56.525560] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.165 ms 00:29:34.144 [2024-11-06 08:05:56.525586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.144 [2024-11-06 08:05:56.545996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.144 [2024-11-06 08:05:56.546054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:34.144 [2024-11-06 08:05:56.546072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.371 ms 00:29:34.144 [2024-11-06 08:05:56.546090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.144 [2024-11-06 08:05:56.561630] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:29:34.144 [2024-11-06 08:05:56.561678] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:34.144 [2024-11-06 08:05:56.561696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.144 [2024-11-06 08:05:56.561709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:34.144 [2024-11-06 08:05:56.561722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.371 ms 00:29:34.144 [2024-11-06 08:05:56.561735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.144 [2024-11-06 08:05:56.586965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.144 [2024-11-06 08:05:56.587025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:34.144 [2024-11-06 08:05:56.587051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.176 ms 00:29:34.144 [2024-11-06 08:05:56.587063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.144 [2024-11-06 08:05:56.600549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.144 [2024-11-06 08:05:56.600600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:34.144 [2024-11-06 08:05:56.600616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.431 ms 00:29:34.144 [2024-11-06 08:05:56.600627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.144 [2024-11-06 08:05:56.613594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.144 [2024-11-06 08:05:56.613631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:34.144 [2024-11-06 08:05:56.613646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.925 ms 00:29:34.144 [2024-11-06 08:05:56.613656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.144 [2024-11-06 08:05:56.614546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.144 [2024-11-06 08:05:56.614601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:34.144 [2024-11-06 08:05:56.614617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.779 ms 00:29:34.144 [2024-11-06 08:05:56.614635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.144 [2024-11-06 08:05:56.689690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.144 [2024-11-06 08:05:56.689777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:34.144 [2024-11-06 08:05:56.689799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 75.027 ms 00:29:34.144 [2024-11-06 08:05:56.689821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.144 [2024-11-06 08:05:56.700700] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:34.144 [2024-11-06 08:05:56.705341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.144 [2024-11-06 08:05:56.705374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:34.144 [2024-11-06 08:05:56.705393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.419 ms 00:29:34.144 [2024-11-06 08:05:56.705406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.144 [2024-11-06 08:05:56.705548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.144 [2024-11-06 08:05:56.705569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:34.144 [2024-11-06 08:05:56.705583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:34.144 [2024-11-06 08:05:56.705594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.144 [2024-11-06 08:05:56.707815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.144 [2024-11-06 08:05:56.707848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:34.144 [2024-11-06 08:05:56.707862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.145 ms 00:29:34.144 [2024-11-06 08:05:56.707873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.144 [2024-11-06 08:05:56.707911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.144 [2024-11-06 08:05:56.707928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:34.144 [2024-11-06 08:05:56.707941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:34.144 [2024-11-06 08:05:56.707953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.144 [2024-11-06 08:05:56.708001] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:34.144 [2024-11-06 08:05:56.708022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.144 [2024-11-06 08:05:56.708034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:34.144 [2024-11-06 08:05:56.708047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:29:34.144 [2024-11-06 08:05:56.708057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.144 [2024-11-06 08:05:56.735413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.144 [2024-11-06 08:05:56.735460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:34.144 [2024-11-06 08:05:56.735478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.330 ms 00:29:34.144 [2024-11-06 08:05:56.735490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:34.144 [2024-11-06 08:05:56.735588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:34.144 [2024-11-06 08:05:56.735607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:34.144 [2024-11-06 08:05:56.735620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:29:34.144 [2024-11-06 08:05:56.735631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:29:34.144 [2024-11-06 08:05:56.739670] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 344.455 ms, result 0 00:29:35.524  [2024-11-06T08:05:59.090Z] Copying: 1080/1048576 [kB] (1080 kBps) [2024-11-06T08:06:00.027Z] Copying: 6692/1048576 [kB] (5612 kBps) [2024-11-06T08:06:00.966Z] Copying: 38/1024 [MB] (32 MBps) [2024-11-06T08:06:02.343Z] Copying: 71/1024 [MB] (32 MBps) [2024-11-06T08:06:03.290Z] Copying: 103/1024 [MB] (32 MBps) [2024-11-06T08:06:04.242Z] Copying: 134/1024 [MB] (31 MBps) [2024-11-06T08:06:05.180Z] Copying: 164/1024 [MB] (29 MBps) [2024-11-06T08:06:06.117Z] Copying: 192/1024 [MB] (28 MBps) [2024-11-06T08:06:07.054Z] Copying: 221/1024 [MB] (28 MBps) [2024-11-06T08:06:07.992Z] Copying: 253/1024 [MB] (32 MBps) [2024-11-06T08:06:08.930Z] Copying: 285/1024 [MB] (32 MBps) [2024-11-06T08:06:10.306Z] Copying: 319/1024 [MB] (33 MBps) [2024-11-06T08:06:11.242Z] Copying: 352/1024 [MB] (33 MBps) [2024-11-06T08:06:12.179Z] Copying: 385/1024 [MB] (33 MBps) [2024-11-06T08:06:13.115Z] Copying: 419/1024 [MB] (33 MBps) [2024-11-06T08:06:14.052Z] Copying: 452/1024 [MB] (33 MBps) [2024-11-06T08:06:14.987Z] Copying: 486/1024 [MB] (33 MBps) [2024-11-06T08:06:16.363Z] Copying: 519/1024 [MB] (33 MBps) [2024-11-06T08:06:16.930Z] Copying: 552/1024 [MB] (33 MBps) [2024-11-06T08:06:18.305Z] Copying: 584/1024 [MB] (31 MBps) [2024-11-06T08:06:19.240Z] Copying: 617/1024 [MB] (33 MBps) [2024-11-06T08:06:20.174Z] Copying: 651/1024 [MB] (33 MBps) [2024-11-06T08:06:21.107Z] Copying: 685/1024 [MB] (33 MBps) [2024-11-06T08:06:22.042Z] Copying: 718/1024 [MB] (33 MBps) [2024-11-06T08:06:22.974Z] Copying: 752/1024 [MB] (33 MBps) [2024-11-06T08:06:24.348Z] Copying: 786/1024 [MB] (33 MBps) [2024-11-06T08:06:25.284Z] Copying: 820/1024 [MB] (34 MBps) [2024-11-06T08:06:26.217Z] Copying: 855/1024 [MB] (34 MBps) [2024-11-06T08:06:27.151Z] Copying: 886/1024 [MB] (31 MBps) [2024-11-06T08:06:28.087Z] Copying: 914/1024 [MB] (28 MBps) [2024-11-06T08:06:29.024Z] Copying: 942/1024 [MB] (27 MBps) [2024-11-06T08:06:29.994Z] Copying: 969/1024 [MB] (27 MBps) [2024-11-06T08:06:30.937Z] Copying: 997/1024 [MB] (27 MBps) [2024-11-06T08:06:31.195Z] Copying: 1024/1024 [MB] (average 30 MBps)[2024-11-06 08:06:31.163438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.566 [2024-11-06 08:06:31.163544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:08.566 [2024-11-06 08:06:31.163579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:08.566 [2024-11-06 08:06:31.163592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.566 [2024-11-06 08:06:31.163623] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:08.566 [2024-11-06 08:06:31.167071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.566 [2024-11-06 08:06:31.167119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:08.566 [2024-11-06 08:06:31.167134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.423 ms 00:30:08.566 [2024-11-06 08:06:31.167145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.566 [2024-11-06 08:06:31.167452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.566 [2024-11-06 08:06:31.167472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:08.566 [2024-11-06 
08:06:31.167493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:30:08.566 [2024-11-06 08:06:31.167506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.566 [2024-11-06 08:06:31.180695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.566 [2024-11-06 08:06:31.180742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:08.566 [2024-11-06 08:06:31.180762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.165 ms 00:30:08.566 [2024-11-06 08:06:31.180790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.566 [2024-11-06 08:06:31.186274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.566 [2024-11-06 08:06:31.186321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:08.566 [2024-11-06 08:06:31.186334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.399 ms 00:30:08.566 [2024-11-06 08:06:31.186353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.825 [2024-11-06 08:06:31.214537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.825 [2024-11-06 08:06:31.214586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:08.825 [2024-11-06 08:06:31.214601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.119 ms 00:30:08.825 [2024-11-06 08:06:31.214611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.825 [2024-11-06 08:06:31.230077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.825 [2024-11-06 08:06:31.230128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:08.825 [2024-11-06 08:06:31.230143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.425 ms 00:30:08.825 [2024-11-06 08:06:31.230155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.825 [2024-11-06 08:06:31.231993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.825 [2024-11-06 08:06:31.232045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:08.825 [2024-11-06 08:06:31.232061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.811 ms 00:30:08.825 [2024-11-06 08:06:31.232073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.825 [2024-11-06 08:06:31.258256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.825 [2024-11-06 08:06:31.258336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:08.825 [2024-11-06 08:06:31.258352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.162 ms 00:30:08.825 [2024-11-06 08:06:31.258363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.825 [2024-11-06 08:06:31.283610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.825 [2024-11-06 08:06:31.283659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:08.825 [2024-11-06 08:06:31.283687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.206 ms 00:30:08.825 [2024-11-06 08:06:31.283698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.825 [2024-11-06 08:06:31.308961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.825 [2024-11-06 08:06:31.309010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Persist superblock, duration: 25.208 ms, status: 0
00:30:08.825 [FTL][ftl0] Action "Set FTL clean state": duration: 25.436 ms, status: 0
00:30:08.825 [2024-11-06 08:06:31.334741] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:30:08.825   Band 1:       261120 / 261120   wr_cnt: 1   state: closed
00:30:08.825   Band 2:         1536 / 261120   wr_cnt: 1   state: open
00:30:08.826   Bands 3-100:       0 / 261120   wr_cnt: 0   state: free
00:30:08.826 [2024-11-06 08:06:31.335971] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:30:08.826   device UUID:      abdcc990-59fc-4691-b84d-7ee957ef350d
00:30:08.826   total valid LBAs: 262656
00:30:08.826   total writes:     151488
00:30:08.826   user writes:      149504
00:30:08.826   WAF:              1.0133
00:30:08.826   limits:           crit: 0  high: 0  low: 0  start: 0
00:30:08.826 [FTL][ftl0] Action "Dump statistics": duration: 1.370 ms, status: 0
00:30:08.826 [FTL][ftl0] Action "Deinitialize L2P": duration: 14.287 ms, status: 0
00:30:08.826 [FTL][ftl0] Action "Deinitialize P2L checkpointing": duration: 0.465 ms, status: 0
00:30:08.826 [FTL][ftl0] Rollback (each with duration: 0.000 ms, status: 0): Initialize reloc, Initialize bands metadata, Initialize trim map, Initialize valid map, Initialize NV cache, Initialize metadata, Initialize core IO channel, Initialize bands, Initialize memory pools, Initialize superblock, Open cache bdev, Open base bdev
00:30:09.085 [2024-11-06 08:06:31.550900] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 387.412 ms, result 0
00:30:10.024 08:06:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:30:11.969 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:30:11.969 08:06:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:30:11.969 [2024-11-06 08:06:34.353605] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization...
00:30:11.969 [2024-11-06 08:06:34.353792] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80874 ]
00:30:11.969 [2024-11-06 08:06:34.543911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:12.227 [2024-11-06 08:06:34.692020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:30:12.486 [2024-11-06 08:06:35.040438] bdev.c:8607:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 (logged twice)
00:30:12.746 [FTL][ftl0] Action "Check configuration": duration: 0.007 ms, status: 0
00:30:12.747 [FTL][ftl0] Action "Open base bdev": duration: 0.035 ms, status: 0
00:30:12.747 [2024-11-06 08:06:35.212774] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:30:12.747 [2024-11-06 08:06:35.213786] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:30:12.747 [FTL][ftl0] Action "Open cache bdev": duration: 1.067 ms, status: 0
00:30:12.747 [2024-11-06 08:06:35.216046] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:30:12.747 [FTL][ftl0] Action "Load super block": duration: 14.049 ms, status: 0
00:30:12.747 [FTL][ftl0] Action "Validate super block": duration: 0.024 ms, status: 0
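An aside on reading the shutdown dump above: the aggregates are derivable from the per-band counters. "total valid LBAs" is the sum of each band's valid count (261120 + 1536 = 262656), and WAF, the write amplification factor, is total writes divided by user writes. A quick sanity check in Python (ours, not part of the test output):

    # Reader's cross-check of ftl_dev_dump_bands / ftl_dev_dump_stats above.
    band_valid = {1: 261120, 2: 1536}           # bands 3-100 hold 0 valid LBAs
    total_valid_lbas = sum(band_valid.values())
    assert total_valid_lbas == 262656           # matches "total valid LBAs"

    total_writes, user_writes = 151488, 149504  # from the stats dump
    waf = total_writes / user_writes
    print(f"WAF = {waf:.4f}")                   # 1.0133, matches the dump

A WAF just above 1.0 means the FTL wrote only about 1.3% more data than the user submitted, which is what the dump's total/user write counters show directly.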
00:30:12.747 [FTL][ftl0] Action "Initialize memory pools": duration: 8.811 ms, status: 0
00:30:12.747 [FTL][ftl0] Action "Initialize bands": duration: 0.060 ms, status: 0
00:30:12.747 [FTL][ftl0] Action "Register IO device": duration: 0.009 ms, status: 0
00:30:12.747 [2024-11-06 08:06:35.239563] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:30:12.747 [FTL][ftl0] Action "Initialize core IO channel": duration: 4.508 ms, status: 0
00:30:12.747 [FTL][ftl0] Action "Decorate bands": duration: 0.011 ms, status: 0
00:30:12.747 [2024-11-06 08:06:35.244297] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:30:12.747 [FTL][ftl0] nvc/base/layout blob load: 0x150 / 0x48 / 0x190 bytes; blob store: 0x150 / 0x48 / 0x190 bytes
00:30:12.747 [FTL][ftl0] Base device capacity: 103424.00 MiB, NV cache device capacity: 5171.00 MiB
00:30:12.747 [FTL][ftl0] L2P entries: 20971520, L2P address size: 4, P2L checkpoint pages: 2048, NV cache chunk count: 5
00:30:12.747 [FTL][ftl0] Action "Initialize layout": duration: 0.355 ms, status: 0
00:30:12.747 [FTL][ftl0] Action "Verify layout": duration: 0.060 ms, status: 0
00:30:12.747 [2024-11-06 08:06:35.244931] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
  Region            offset (MiB)   blocks (MiB)
  sb                      0.00           0.12
  l2p                     0.12          80.00
  band_md                80.12           0.50
  band_md_mirror         80.62           0.50
  nvc_md                113.88           0.12
  nvc_md_mirror         114.00           0.12
  p2l0                   81.12           8.00
  p2l1                   89.12           8.00
  p2l2                   97.12           8.00
  p2l3                  105.12           8.00
  trim_md               113.12           0.25
  trim_md_mirror        113.38           0.25
  trim_log              113.62           0.12
  trim_log_mirror       113.75           0.12
00:30:12.748 [2024-11-06 08:06:35.245486] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
  Region            offset (MiB)   blocks (MiB)
  sb_mirror               0.00           0.12
  vmap               102400.25           3.38
  data_btm                0.25      102400.00
00:30:12.748 [2024-11-06 08:06:35.245612] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
  Region type:0x0        ver:5  blk_offs:0x0       blk_sz:0x20
  Region type:0x2        ver:0  blk_offs:0x20      blk_sz:0x5000
  Region type:0x3        ver:2  blk_offs:0x5020    blk_sz:0x80
  Region type:0x4        ver:2  blk_offs:0x50a0    blk_sz:0x80
  Region type:0xa        ver:2  blk_offs:0x5120    blk_sz:0x800
  Region type:0xb        ver:2  blk_offs:0x5920    blk_sz:0x800
  Region type:0xc        ver:2  blk_offs:0x6120    blk_sz:0x800
  Region type:0xd        ver:2  blk_offs:0x6920    blk_sz:0x800
  Region type:0xe        ver:0  blk_offs:0x7120    blk_sz:0x40
  Region type:0xf        ver:0  blk_offs:0x7160    blk_sz:0x40
  Region type:0x10       ver:1  blk_offs:0x71a0    blk_sz:0x20
  Region type:0x11       ver:1  blk_offs:0x71c0    blk_sz:0x20
  Region type:0x6        ver:2  blk_offs:0x71e0    blk_sz:0x20
  Region type:0x7        ver:2  blk_offs:0x7200    blk_sz:0x20
  Region type:0xfffffffe ver:0  blk_offs:0x7220    blk_sz:0x13c0e0
00:30:12.748 [FTL][ftl0] SB metadata layout - base dev:
  Region type:0x1        ver:5  blk_offs:0x0        blk_sz:0x20
  Region type:0xfffffffe ver:0  blk_offs:0x20       blk_sz:0x20
  Region type:0x9        ver:0  blk_offs:0x40       blk_sz:0x1900000
  Region type:0x5        ver:0  blk_offs:0x1900040  blk_sz:0x360
  Region type:0xfffffffe ver:0  blk_offs:0x19003a0  blk_sz:0x3fc60
00:30:12.748 [FTL][ftl0] Action "Layout upgrade": duration: 0.995 ms, status: 0
00:30:12.748 [FTL][ftl0] Action "Initialize metadata": duration: 34.357 ms, status: 0
00:30:12.748 [FTL][ftl0] Action "Initialize band addresses": duration: 0.055 ms, status: 0
00:30:12.748 [FTL][ftl0] Action "Initialize NV cache": duration: 46.847 ms, status: 0
00:30:12.748 [FTL][ftl0] Action "Initialize valid map": duration: 0.003 ms, status: 0
00:30:12.748 [FTL][ftl0] Action "Initialize trim map": duration: 0.636 ms, status: 0
00:30:12.748 [FTL][ftl0] Action "Initialize bands metadata": duration: 0.128 ms, status: 0
00:30:12.748 [FTL][ftl0] Action "Initialize reloc": duration: 16.866 ms, status: 0
00:30:12.748 [2024-11-06 08:06:35.359465] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:30:12.748 [2024-11-06 08:06:35.359509] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:30:12.748 [FTL][ftl0] Action "Restore NV cache metadata": duration: 13.733 ms, status: 0
00:30:13.126 [FTL][ftl0] Action "Restore valid map metadata": duration: 25.112 ms, status: 0
00:30:13.126 [FTL][ftl0] Action "Restore band info metadata": duration: 12.951 ms, status: 0
00:30:13.126 [FTL][ftl0] Action "Restore trim metadata": duration: 12.808 ms, status: 0
00:30:13.126 [FTL][ftl0] Action "Initialize P2L checkpointing": duration: 0.700 ms, status: 0
00:30:13.126 [FTL][ftl0] Action "Restore P2L checkpoints": duration: 65.921 ms, status: 0
00:30:13.126 [2024-11-06 08:06:35.488284] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:30:13.126 [FTL][ftl0] Action "Initialize L2P": duration: 13.471 ms, status: 0
00:30:13.126 [FTL][ftl0] Action "Restore L2P": duration: 0.007 ms, status: 0
00:30:13.126 [FTL][ftl0] Action "Finalize band initialization": duration: 0.988 ms, status: 0
00:30:13.126 [FTL][ftl0] Action "Start core poller": duration: 0.007 ms, status: 0
00:30:13.126 [2024-11-06 08:06:35.492881] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:30:13.126 [FTL][ftl0] Action "Self test on startup": duration: 0.022 ms, status: 0
00:30:13.126 [FTL][ftl0] Action "Set FTL dirty state": duration: 26.278 ms, status: 0
00:30:13.126 [FTL][ftl0] Action "Finalize initialization": duration: 0.037 ms, status: 0
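Two of the layout numbers above can be cross-checked against each other: the l2p region must hold one mapping entry per L2P entry, and each p2l region holds one checkpoint's worth of pages. A small check (ours, not SPDK output; the 4 KiB block size is an assumption, since the dump does not print it):

    # Reader's cross-check of the ftl_layout_dump output above.
    l2p_entries = 20_971_520        # "L2P entries" from the dump
    addr_size_b = 4                 # "L2P address size: 4" (bytes)
    l2p_mib = l2p_entries * addr_size_b / 2**20
    assert l2p_mib == 80.0          # matches the 80.00 MiB l2p region

    # Each of p2l0..p2l3 holds one checkpoint; assuming 4 KiB blocks,
    # "P2L checkpoint pages: 2048" comes out to the dumped 8.00 MiB.
    p2l_pages, block_b = 2048, 4096
    assert p2l_pages * block_b / 2**20 == 8.0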
00:30:13.126 [2024-11-06 08:06:35.521081] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 307.947 ms, result 0
00:30:14.063 [2024-11-06T08:06:38.080Z .. 2024-11-06T08:07:21.635Z] Copying: 22/1024 -> 1024/1024 [MB], in 22 MB increments at a steady 21-23 MBps (average 22 MBps)
00:30:59.006 [FTL][ftl0] Action "Deinit core IO channel": duration: 0.005 ms, status: 0
00:30:59.006 [2024-11-06 08:07:21.523996] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
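The copy phase above reports its own running rate, and the stated average also matches the wall clock: 1024 MB moved between the 'FTL startup' finish at 08:06:35.521 and the final progress tick at 08:07:21.635. A quick check (ours, not part of the log):

    # Reader's check of the "(average 22 MBps)" figure above.
    from datetime import datetime
    t0 = datetime.fromisoformat("2024-11-06T08:06:35.521081")  # startup done
    t1 = datetime.fromisoformat("2024-11-06T08:07:21.635000")  # final tick
    mbps = 1024 / (t1 - t0).total_seconds()
    print(f"average = {mbps:.1f} MBps")  # ~22.2, consistent with the log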
00:30:59.006 [FTL][ftl0] Action "Unregister IO device": duration: 5.293 ms, status: 0
00:30:59.006 [FTL][ftl0] Action "Stop core poller": duration: 0.315 ms, status: 0
00:30:59.006 [FTL][ftl0] Action "Persist L2P": duration: 4.132 ms, status: 0
00:30:59.006 [FTL][ftl0] Action "Finish L2P trims": duration: 5.229 ms, status: 0
00:30:59.006 [FTL][ftl0] Action "Persist NV cache metadata": duration: 25.642 ms, status: 0
00:30:59.006 [FTL][ftl0] Action "Persist valid map metadata": duration: 14.835 ms, status: 0
00:30:59.007 [FTL][ftl0] Action "Persist P2L metadata": duration: 1.966 ms, status: 0
00:30:59.007 [FTL][ftl0] Action "Persist band info metadata": duration: 24.359 ms, status: 0
00:30:59.007 [FTL][ftl0] Action "Persist trim metadata": duration: 24.275 ms, status: 0
00:30:59.267 [FTL][ftl0] Action "Persist superblock": duration: 24.946 ms, status: 0
00:30:59.267 [FTL][ftl0] Action "Set FTL clean state": duration: 23.693 ms, status: 0
00:30:59.267 [2024-11-06 08:07:21.679933] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:30:59.267   Band 1:      261120 / 261120   wr_cnt: 1   state: closed
00:30:59.267   Band 2:        1536 / 261120   wr_cnt: 1   state: open
00:30:59.267   Bands 3-63:       0 / 261120   wr_cnt: 0   state: free
00:30:59.267 [FTL][ftl0] Band 64:
0 / 261120 wr_cnt: 0 state: free 00:30:59.267 [2024-11-06 08:07:21.680699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:59.267 [2024-11-06 08:07:21.680710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:59.267 [2024-11-06 08:07:21.680721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:59.267 [2024-11-06 08:07:21.680731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:59.267 [2024-11-06 08:07:21.680742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:59.267 [2024-11-06 08:07:21.680752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:59.267 [2024-11-06 08:07:21.680763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:59.267 [2024-11-06 08:07:21.680774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:59.267 [2024-11-06 08:07:21.680784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:59.267 [2024-11-06 08:07:21.680794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:59.267 [2024-11-06 08:07:21.680805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:59.267 [2024-11-06 08:07:21.680815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:59.267 [2024-11-06 08:07:21.680826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:59.267 [2024-11-06 08:07:21.680836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:59.268 [2024-11-06 08:07:21.680847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:59.268 [2024-11-06 08:07:21.680858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:59.268 [2024-11-06 08:07:21.680869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:59.268 [2024-11-06 08:07:21.680879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:59.268 [2024-11-06 08:07:21.680890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:59.268 [2024-11-06 08:07:21.680900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:59.268 [2024-11-06 08:07:21.680910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:59.268 [2024-11-06 08:07:21.680921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:59.268 [2024-11-06 08:07:21.680932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:59.268 [2024-11-06 08:07:21.680943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:59.268 [2024-11-06 08:07:21.680955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:59.268 [2024-11-06 08:07:21.680965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:59.268 [2024-11-06 08:07:21.680976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:59.268 [2024-11-06 08:07:21.680988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:59.268 [2024-11-06 08:07:21.680999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:59.268 [2024-11-06 08:07:21.681009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:59.268 [2024-11-06 08:07:21.681020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:59.268 [2024-11-06 08:07:21.681030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:59.268 [2024-11-06 08:07:21.681041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:59.268 [2024-11-06 08:07:21.681051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:59.268 [2024-11-06 08:07:21.681062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:59.268 [2024-11-06 08:07:21.681072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:59.268 [2024-11-06 08:07:21.681101] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:59.268 [2024-11-06 08:07:21.681129] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: abdcc990-59fc-4691-b84d-7ee957ef350d 00:30:59.268 [2024-11-06 08:07:21.681146] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:30:59.268 [2024-11-06 08:07:21.681156] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:59.268 [2024-11-06 08:07:21.681166] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:59.268 [2024-11-06 08:07:21.681177] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:59.268 [2024-11-06 08:07:21.681187] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:59.268 [2024-11-06 08:07:21.681198] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:59.268 [2024-11-06 08:07:21.681220] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:59.268 [2024-11-06 08:07:21.681229] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:59.268 [2024-11-06 08:07:21.681238] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:59.268 [2024-11-06 08:07:21.681249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.268 [2024-11-06 08:07:21.681260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:59.268 [2024-11-06 08:07:21.681285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.318 ms 00:30:59.268 [2024-11-06 08:07:21.681296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.268 [2024-11-06 08:07:21.695128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.268 [2024-11-06 08:07:21.695161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinitialize L2P 00:30:59.268 [2024-11-06 08:07:21.695175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.809 ms 00:30:59.268 [2024-11-06 08:07:21.695185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.268 [2024-11-06 08:07:21.695721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:59.268 [2024-11-06 08:07:21.695751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:59.268 [2024-11-06 08:07:21.695771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.495 ms 00:30:59.268 [2024-11-06 08:07:21.695783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.268 [2024-11-06 08:07:21.731404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.268 [2024-11-06 08:07:21.731443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:59.268 [2024-11-06 08:07:21.731456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.268 [2024-11-06 08:07:21.731465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.268 [2024-11-06 08:07:21.731515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.268 [2024-11-06 08:07:21.731529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:59.268 [2024-11-06 08:07:21.731545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.268 [2024-11-06 08:07:21.731555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.268 [2024-11-06 08:07:21.731641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.268 [2024-11-06 08:07:21.731659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:59.268 [2024-11-06 08:07:21.731686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.268 [2024-11-06 08:07:21.731727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.268 [2024-11-06 08:07:21.731749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.268 [2024-11-06 08:07:21.731762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:59.268 [2024-11-06 08:07:21.731774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.268 [2024-11-06 08:07:21.731790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.268 [2024-11-06 08:07:21.815821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.268 [2024-11-06 08:07:21.815879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:59.268 [2024-11-06 08:07:21.815894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.268 [2024-11-06 08:07:21.815904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.268 [2024-11-06 08:07:21.887576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.268 [2024-11-06 08:07:21.887665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:59.268 [2024-11-06 08:07:21.887681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.268 [2024-11-06 08:07:21.887698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.268 [2024-11-06 08:07:21.887770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.268 [2024-11-06 
08:07:21.887787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:59.268 [2024-11-06 08:07:21.887799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.268 [2024-11-06 08:07:21.887826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.268 [2024-11-06 08:07:21.887942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.268 [2024-11-06 08:07:21.887959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:59.268 [2024-11-06 08:07:21.887972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.268 [2024-11-06 08:07:21.887983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.268 [2024-11-06 08:07:21.888108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.268 [2024-11-06 08:07:21.888134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:59.268 [2024-11-06 08:07:21.888146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.268 [2024-11-06 08:07:21.888158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.268 [2024-11-06 08:07:21.888213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.268 [2024-11-06 08:07:21.888231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:59.268 [2024-11-06 08:07:21.888244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.268 [2024-11-06 08:07:21.888300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.268 [2024-11-06 08:07:21.888354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.268 [2024-11-06 08:07:21.888370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:59.268 [2024-11-06 08:07:21.888382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.268 [2024-11-06 08:07:21.888393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.268 [2024-11-06 08:07:21.888446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:59.268 [2024-11-06 08:07:21.888463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:59.268 [2024-11-06 08:07:21.888475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:59.268 [2024-11-06 08:07:21.888486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:59.268 [2024-11-06 08:07:21.888648] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 364.818 ms, result 0 00:31:00.205 00:31:00.205 00:31:00.205 08:07:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:31:02.109 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:31:02.109 08:07:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:31:02.109 08:07:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:31:02.109 08:07:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:02.109 08:07:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:31:02.109 08:07:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:31:02.368 08:07:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:02.368 08:07:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:31:02.368 08:07:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 79006 00:31:02.368 08:07:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # '[' -z 79006 ']' 00:31:02.368 08:07:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # kill -0 79006 00:31:02.368 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (79006) - No such process 00:31:02.368 Process with pid 79006 is not found 00:31:02.368 08:07:24 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@977 -- # echo 'Process with pid 79006 is not found' 00:31:02.368 08:07:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:31:02.626 08:07:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:31:02.626 Remove shared memory files 00:31:02.626 08:07:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:02.626 08:07:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:31:02.626 08:07:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:31:02.626 08:07:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:31:02.626 08:07:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:02.626 08:07:25 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:31:02.626 00:31:02.627 real 3m49.225s 00:31:02.627 user 4m24.646s 00:31:02.627 sys 0m36.562s 00:31:02.627 08:07:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:02.627 08:07:25 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:02.627 ************************************ 00:31:02.627 END TEST ftl_dirty_shutdown 00:31:02.627 ************************************ 00:31:02.627 08:07:25 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:31:02.627 08:07:25 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:02.627 08:07:25 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:02.627 08:07:25 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:02.627 ************************************ 00:31:02.627 START TEST ftl_upgrade_shutdown 00:31:02.627 ************************************ 00:31:02.627 08:07:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:31:02.627 * Looking for test storage... 
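Before the upgrade test proceeds, note the teardown pattern just above: kill -0 sends no signal and only asks the kernel whether the pid exists, so the already-finished dirty-shutdown target (pid 79006) is reported as not found rather than re-killed. A minimal sketch of that pattern, assuming a plain bash helper (the in-tree autotest_common.sh version adds argument checks and xtrace control, as visible in the trace):

    # Sketch only: probe with signal 0 before attempting a real kill.
    killprocess() {
        local pid=$1
        if kill -0 "$pid" 2> /dev/null; then   # signal 0: existence test, nothing is delivered
            kill "$pid" && wait "$pid" 2> /dev/null
        else
            echo "Process with pid $pid is not found"
        fi
    }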
00:31:02.627 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:31:02.627 08:07:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:31:02.627 08:07:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1689 -- # lcov --version 00:31:02.627 08:07:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:31:02.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:02.886 --rc genhtml_branch_coverage=1 00:31:02.886 --rc genhtml_function_coverage=1 00:31:02.886 --rc genhtml_legend=1 00:31:02.886 --rc geninfo_all_blocks=1 00:31:02.886 --rc geninfo_unexecuted_blocks=1 00:31:02.886 00:31:02.886 ' 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:31:02.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:02.886 --rc genhtml_branch_coverage=1 00:31:02.886 --rc genhtml_function_coverage=1 00:31:02.886 --rc genhtml_legend=1 00:31:02.886 --rc geninfo_all_blocks=1 00:31:02.886 --rc geninfo_unexecuted_blocks=1 00:31:02.886 00:31:02.886 ' 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:31:02.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:02.886 --rc genhtml_branch_coverage=1 00:31:02.886 --rc genhtml_function_coverage=1 00:31:02.886 --rc genhtml_legend=1 00:31:02.886 --rc geninfo_all_blocks=1 00:31:02.886 --rc geninfo_unexecuted_blocks=1 00:31:02.886 00:31:02.886 ' 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:31:02.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:02.886 --rc genhtml_branch_coverage=1 00:31:02.886 --rc genhtml_function_coverage=1 00:31:02.886 --rc genhtml_legend=1 00:31:02.886 --rc geninfo_all_blocks=1 00:31:02.886 --rc geninfo_unexecuted_blocks=1 00:31:02.886 00:31:02.886 ' 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:31:02.886 08:07:25 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:31:02.886 08:07:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:31:02.887 08:07:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:31:02.887 08:07:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:31:02.887 08:07:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:31:02.887 08:07:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:31:02.887 08:07:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:31:02.887 08:07:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:31:02.887 08:07:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:02.887 08:07:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81446 00:31:02.887 08:07:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:31:02.887 08:07:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81446 00:31:02.887 08:07:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:31:02.887 08:07:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 81446 ']' 00:31:02.887 08:07:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:02.887 08:07:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:02.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:02.887 08:07:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:02.887 08:07:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:02.887 08:07:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:03.146 [2024-11-06 08:07:25.528229] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
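At this point the harness has forked spdk_tgt (pid 81446) pinned to core 0 and is blocking in waitforlisten until the application's RPC server answers on /var/tmp/spdk.sock. A minimal sketch of that start-and-wait step, using the paths from this run and the generic rpc_get_methods call as the readiness probe (the in-tree waitforlisten performs additional checks; max_retries=100 matches the value logged above):

    # Sketch only: launch the SPDK target and poll its RPC socket until it responds.
    spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$spdk_tgt_bin" --cpumask='[0]' &
    spdk_tgt_pid=$!
    for ((i = 0; i < 100; i++)); do
        # rpc_get_methods succeeds only once the app has finished init and serves RPCs
        if "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.5
    done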
00:31:03.146 [2024-11-06 08:07:25.528459] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81446 ] 00:31:03.146 [2024-11-06 08:07:25.723470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:03.405 [2024-11-06 08:07:25.871923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:04.342 08:07:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:04.342 08:07:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:31:04.342 08:07:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:04.342 08:07:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:31:04.342 08:07:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:31:04.342 08:07:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:04.342 08:07:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:31:04.342 08:07:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:04.342 08:07:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:31:04.342 08:07:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:04.342 08:07:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:31:04.342 08:07:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:04.342 08:07:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:31:04.342 08:07:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:04.342 08:07:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:31:04.342 08:07:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:31:04.342 08:07:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:31:04.342 08:07:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:31:04.342 08:07:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:31:04.342 08:07:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:31:04.342 08:07:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:31:04.342 08:07:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:31:04.342 08:07:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:31:04.342 08:07:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:31:04.342 08:07:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:31:04.342 08:07:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:31:04.342 08:07:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:31:04.342 08:07:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:31:04.342 08:07:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:31:04.342 08:07:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:31:04.342 08:07:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:31:04.601 08:07:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:31:04.601 { 00:31:04.601 "name": "basen1", 00:31:04.601 "aliases": [ 00:31:04.601 "101d89c9-e625-4864-876e-57fcb0aeddde" 00:31:04.601 ], 00:31:04.601 "product_name": "NVMe disk", 00:31:04.601 "block_size": 4096, 00:31:04.601 "num_blocks": 1310720, 00:31:04.601 "uuid": "101d89c9-e625-4864-876e-57fcb0aeddde", 00:31:04.601 "numa_id": -1, 00:31:04.601 "assigned_rate_limits": { 00:31:04.601 "rw_ios_per_sec": 0, 00:31:04.601 "rw_mbytes_per_sec": 0, 00:31:04.601 "r_mbytes_per_sec": 0, 00:31:04.601 "w_mbytes_per_sec": 0 00:31:04.601 }, 00:31:04.601 "claimed": true, 00:31:04.601 "claim_type": "read_many_write_one", 00:31:04.601 "zoned": false, 00:31:04.601 "supported_io_types": { 00:31:04.601 "read": true, 00:31:04.601 "write": true, 00:31:04.601 "unmap": true, 00:31:04.601 "flush": true, 00:31:04.601 "reset": true, 00:31:04.601 "nvme_admin": true, 00:31:04.601 "nvme_io": true, 00:31:04.601 "nvme_io_md": false, 00:31:04.601 "write_zeroes": true, 00:31:04.601 "zcopy": false, 00:31:04.601 "get_zone_info": false, 00:31:04.601 "zone_management": false, 00:31:04.601 "zone_append": false, 00:31:04.601 "compare": true, 00:31:04.601 "compare_and_write": false, 00:31:04.601 "abort": true, 00:31:04.601 "seek_hole": false, 00:31:04.601 "seek_data": false, 00:31:04.601 "copy": true, 00:31:04.601 "nvme_iov_md": false 00:31:04.601 }, 00:31:04.601 "driver_specific": { 00:31:04.601 "nvme": [ 00:31:04.601 { 00:31:04.601 "pci_address": "0000:00:11.0", 00:31:04.601 "trid": { 00:31:04.601 "trtype": "PCIe", 00:31:04.601 "traddr": "0000:00:11.0" 00:31:04.601 }, 00:31:04.601 "ctrlr_data": { 00:31:04.601 "cntlid": 0, 00:31:04.601 "vendor_id": "0x1b36", 00:31:04.601 "model_number": "QEMU NVMe Ctrl", 00:31:04.601 "serial_number": "12341", 00:31:04.601 "firmware_revision": "8.0.0", 00:31:04.601 "subnqn": "nqn.2019-08.org.qemu:12341", 00:31:04.601 "oacs": { 00:31:04.601 "security": 0, 00:31:04.601 "format": 1, 00:31:04.601 "firmware": 0, 00:31:04.601 "ns_manage": 1 00:31:04.601 }, 00:31:04.601 "multi_ctrlr": false, 00:31:04.601 "ana_reporting": false 00:31:04.601 }, 00:31:04.601 "vs": { 00:31:04.601 "nvme_version": "1.4" 00:31:04.601 }, 00:31:04.601 "ns_data": { 00:31:04.601 "id": 1, 00:31:04.601 "can_share": false 00:31:04.601 } 00:31:04.601 } 00:31:04.601 ], 00:31:04.601 "mp_policy": "active_passive" 00:31:04.601 } 00:31:04.601 } 00:31:04.601 ]' 00:31:04.601 08:07:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:31:04.860 08:07:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:31:04.860 08:07:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:31:04.860 08:07:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:31:04.860 08:07:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:31:04.860 08:07:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:31:04.860 08:07:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:31:04.860 08:07:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:31:04.860 08:07:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:31:04.860 08:07:27 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:04.860 08:07:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:31:05.119 08:07:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=9b277449-1318-4efc-8667-56dca41a5e61 00:31:05.119 08:07:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:31:05.119 08:07:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9b277449-1318-4efc-8667-56dca41a5e61 00:31:05.378 08:07:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:31:05.378 08:07:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=5c22f57b-f155-4f94-8c9a-55efcc8343ed 00:31:05.378 08:07:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 5c22f57b-f155-4f94-8c9a-55efcc8343ed 00:31:05.637 08:07:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=2b48ff5d-4af9-4086-a959-3adb696a96e2 00:31:05.637 08:07:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 2b48ff5d-4af9-4086-a959-3adb696a96e2 ]] 00:31:05.637 08:07:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 2b48ff5d-4af9-4086-a959-3adb696a96e2 5120 00:31:05.637 08:07:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:31:05.637 08:07:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:31:05.637 08:07:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=2b48ff5d-4af9-4086-a959-3adb696a96e2 00:31:05.637 08:07:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:31:05.637 08:07:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 2b48ff5d-4af9-4086-a959-3adb696a96e2 00:31:05.637 08:07:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=2b48ff5d-4af9-4086-a959-3adb696a96e2 00:31:05.637 08:07:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:31:05.637 08:07:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:31:05.637 08:07:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:31:05.637 08:07:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2b48ff5d-4af9-4086-a959-3adb696a96e2 00:31:05.896 08:07:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:31:05.896 { 00:31:05.896 "name": "2b48ff5d-4af9-4086-a959-3adb696a96e2", 00:31:05.896 "aliases": [ 00:31:05.896 "lvs/basen1p0" 00:31:05.896 ], 00:31:05.896 "product_name": "Logical Volume", 00:31:05.896 "block_size": 4096, 00:31:05.896 "num_blocks": 5242880, 00:31:05.896 "uuid": "2b48ff5d-4af9-4086-a959-3adb696a96e2", 00:31:05.896 "assigned_rate_limits": { 00:31:05.896 "rw_ios_per_sec": 0, 00:31:05.896 "rw_mbytes_per_sec": 0, 00:31:05.896 "r_mbytes_per_sec": 0, 00:31:05.896 "w_mbytes_per_sec": 0 00:31:05.896 }, 00:31:05.896 "claimed": false, 00:31:05.896 "zoned": false, 00:31:05.896 "supported_io_types": { 00:31:05.896 "read": true, 00:31:05.896 "write": true, 00:31:05.896 "unmap": true, 00:31:05.896 "flush": false, 00:31:05.896 "reset": true, 00:31:05.896 "nvme_admin": false, 00:31:05.896 "nvme_io": false, 00:31:05.896 "nvme_io_md": false, 00:31:05.896 "write_zeroes": 
true, 00:31:05.896 "zcopy": false, 00:31:05.896 "get_zone_info": false, 00:31:05.896 "zone_management": false, 00:31:05.896 "zone_append": false, 00:31:05.896 "compare": false, 00:31:05.896 "compare_and_write": false, 00:31:05.896 "abort": false, 00:31:05.896 "seek_hole": true, 00:31:05.896 "seek_data": true, 00:31:05.896 "copy": false, 00:31:05.896 "nvme_iov_md": false 00:31:05.896 }, 00:31:05.896 "driver_specific": { 00:31:05.896 "lvol": { 00:31:05.896 "lvol_store_uuid": "5c22f57b-f155-4f94-8c9a-55efcc8343ed", 00:31:05.896 "base_bdev": "basen1", 00:31:05.896 "thin_provision": true, 00:31:05.896 "num_allocated_clusters": 0, 00:31:05.896 "snapshot": false, 00:31:05.896 "clone": false, 00:31:05.896 "esnap_clone": false 00:31:05.896 } 00:31:05.896 } 00:31:05.896 } 00:31:05.896 ]' 00:31:05.896 08:07:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:31:05.896 08:07:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:31:05.896 08:07:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:31:06.155 08:07:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:31:06.155 08:07:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:31:06.155 08:07:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:31:06.155 08:07:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:31:06.155 08:07:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:31:06.155 08:07:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:31:06.413 08:07:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:31:06.413 08:07:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:31:06.413 08:07:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:31:06.671 08:07:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:31:06.671 08:07:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:31:06.671 08:07:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 2b48ff5d-4af9-4086-a959-3adb696a96e2 -c cachen1p0 --l2p_dram_limit 2 00:31:06.931 [2024-11-06 08:07:29.311598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.931 [2024-11-06 08:07:29.311648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:31:06.931 [2024-11-06 08:07:29.311668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:06.931 [2024-11-06 08:07:29.311679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.931 [2024-11-06 08:07:29.311737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.931 [2024-11-06 08:07:29.311755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:06.931 [2024-11-06 08:07:29.311768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:31:06.931 [2024-11-06 08:07:29.311778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.931 [2024-11-06 08:07:29.311806] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:31:06.931 [2024-11-06 
08:07:29.312533] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:31:06.931 [2024-11-06 08:07:29.312568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.931 [2024-11-06 08:07:29.312581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:06.931 [2024-11-06 08:07:29.312596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.767 ms 00:31:06.931 [2024-11-06 08:07:29.312606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.931 [2024-11-06 08:07:29.312687] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 0a382153-ee62-4328-afb1-a35c2c52f33e 00:31:06.931 [2024-11-06 08:07:29.314547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.931 [2024-11-06 08:07:29.314583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:31:06.931 [2024-11-06 08:07:29.314597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:31:06.931 [2024-11-06 08:07:29.314609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.931 [2024-11-06 08:07:29.323659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.931 [2024-11-06 08:07:29.323701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:06.931 [2024-11-06 08:07:29.323715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.002 ms 00:31:06.931 [2024-11-06 08:07:29.323731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.931 [2024-11-06 08:07:29.323784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.931 [2024-11-06 08:07:29.323803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:06.931 [2024-11-06 08:07:29.323815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:31:06.931 [2024-11-06 08:07:29.323830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.931 [2024-11-06 08:07:29.323900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.931 [2024-11-06 08:07:29.323921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:31:06.931 [2024-11-06 08:07:29.323933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:31:06.931 [2024-11-06 08:07:29.323946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.931 [2024-11-06 08:07:29.323978] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:31:06.931 [2024-11-06 08:07:29.328385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.931 [2024-11-06 08:07:29.328419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:06.931 [2024-11-06 08:07:29.328439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.412 ms 00:31:06.931 [2024-11-06 08:07:29.328451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.931 [2024-11-06 08:07:29.328485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.931 [2024-11-06 08:07:29.328499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:31:06.931 [2024-11-06 08:07:29.328512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:06.931 [2024-11-06 08:07:29.328522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:31:06.931 [2024-11-06 08:07:29.328565] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:31:06.931 [2024-11-06 08:07:29.328696] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:31:06.931 [2024-11-06 08:07:29.328717] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:31:06.931 [2024-11-06 08:07:29.328731] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:31:06.932 [2024-11-06 08:07:29.328746] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:31:06.932 [2024-11-06 08:07:29.328758] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:31:06.932 [2024-11-06 08:07:29.328771] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:31:06.932 [2024-11-06 08:07:29.328780] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:31:06.932 [2024-11-06 08:07:29.328792] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:31:06.932 [2024-11-06 08:07:29.328801] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:31:06.932 [2024-11-06 08:07:29.328817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.932 [2024-11-06 08:07:29.328827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:31:06.932 [2024-11-06 08:07:29.328840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.256 ms 00:31:06.932 [2024-11-06 08:07:29.328849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.932 [2024-11-06 08:07:29.328927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.932 [2024-11-06 08:07:29.328941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:31:06.932 [2024-11-06 08:07:29.328955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:31:06.932 [2024-11-06 08:07:29.328975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.932 [2024-11-06 08:07:29.329069] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:31:06.932 [2024-11-06 08:07:29.329086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:31:06.932 [2024-11-06 08:07:29.329130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:06.932 [2024-11-06 08:07:29.329142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:06.932 [2024-11-06 08:07:29.329155] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:31:06.932 [2024-11-06 08:07:29.329165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:31:06.932 [2024-11-06 08:07:29.329177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:31:06.932 [2024-11-06 08:07:29.329187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:31:06.932 [2024-11-06 08:07:29.329200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:31:06.932 [2024-11-06 08:07:29.329210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:06.932 [2024-11-06 08:07:29.329221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:31:06.932 [2024-11-06 08:07:29.329232] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:31:06.932 [2024-11-06 08:07:29.329245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:06.932 [2024-11-06 08:07:29.329255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:31:06.932 [2024-11-06 08:07:29.329284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:31:06.932 [2024-11-06 08:07:29.329297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:06.932 [2024-11-06 08:07:29.329311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:31:06.932 [2024-11-06 08:07:29.329321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:31:06.932 [2024-11-06 08:07:29.329332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:06.932 [2024-11-06 08:07:29.329342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:31:06.932 [2024-11-06 08:07:29.329355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:31:06.932 [2024-11-06 08:07:29.329365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:06.932 [2024-11-06 08:07:29.329376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:31:06.932 [2024-11-06 08:07:29.329385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:31:06.932 [2024-11-06 08:07:29.329397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:06.932 [2024-11-06 08:07:29.329406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:31:06.932 [2024-11-06 08:07:29.329417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:31:06.932 [2024-11-06 08:07:29.329427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:06.932 [2024-11-06 08:07:29.329438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:31:06.932 [2024-11-06 08:07:29.329447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:31:06.932 [2024-11-06 08:07:29.329459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:06.932 [2024-11-06 08:07:29.329468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:31:06.932 [2024-11-06 08:07:29.329482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:31:06.932 [2024-11-06 08:07:29.329491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:06.932 [2024-11-06 08:07:29.329502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:31:06.932 [2024-11-06 08:07:29.329511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:31:06.932 [2024-11-06 08:07:29.329523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:06.932 [2024-11-06 08:07:29.329532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:31:06.932 [2024-11-06 08:07:29.329543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:31:06.932 [2024-11-06 08:07:29.329567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:06.932 [2024-11-06 08:07:29.329578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:31:06.932 [2024-11-06 08:07:29.329588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:31:06.932 [2024-11-06 08:07:29.329599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:06.932 [2024-11-06 08:07:29.329609] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:31:06.932 [2024-11-06 08:07:29.329621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:31:06.932 [2024-11-06 08:07:29.329631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:06.932 [2024-11-06 08:07:29.329643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:06.932 [2024-11-06 08:07:29.329653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:31:06.932 [2024-11-06 08:07:29.329668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:31:06.932 [2024-11-06 08:07:29.329677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:31:06.932 [2024-11-06 08:07:29.329688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:31:06.932 [2024-11-06 08:07:29.329698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:31:06.932 [2024-11-06 08:07:29.329709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:31:06.932 [2024-11-06 08:07:29.329723] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:31:06.932 [2024-11-06 08:07:29.329738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:06.932 [2024-11-06 08:07:29.329749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:31:06.932 [2024-11-06 08:07:29.329762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:31:06.932 [2024-11-06 08:07:29.329772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:31:06.932 [2024-11-06 08:07:29.329787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:31:06.932 [2024-11-06 08:07:29.329797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:31:06.932 [2024-11-06 08:07:29.329808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:31:06.932 [2024-11-06 08:07:29.329818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:31:06.932 [2024-11-06 08:07:29.329830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:31:06.932 [2024-11-06 08:07:29.329840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:31:06.932 [2024-11-06 08:07:29.329854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:31:06.932 [2024-11-06 08:07:29.329864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:31:06.932 [2024-11-06 08:07:29.329875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:31:06.932 [2024-11-06 08:07:29.329885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:31:06.932 [2024-11-06 08:07:29.329897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:31:06.932 [2024-11-06 08:07:29.329922] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:31:06.932 [2024-11-06 08:07:29.329940] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:06.932 [2024-11-06 08:07:29.329952] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:06.932 [2024-11-06 08:07:29.329964] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:31:06.932 [2024-11-06 08:07:29.329975] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:31:06.932 [2024-11-06 08:07:29.329987] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:31:06.932 [2024-11-06 08:07:29.330006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.932 [2024-11-06 08:07:29.330020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:31:06.932 [2024-11-06 08:07:29.330031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.995 ms 00:31:06.932 [2024-11-06 08:07:29.330044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.932 [2024-11-06 08:07:29.330096] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
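For reference, the offsets and sizes in the FTL layout dump above are expressed in FTL blocks, which are 4 KiB each in SPDK's FTL; the MiB figures printed alongside follow directly from the hex block counts. A minimal cross-check of that arithmetic, assuming the 4 KiB block size:

    # data_btm: blk_sz 0x480000 blocks, reported above as 18432.00 MiB
    echo $(( 0x480000 * 4096 / 1048576 ))                   # -> 18432
    # the base-dev region at blk_offs 0x480040 starts at 18432.25 MiB,
    # matching the vmap offset in the base device layout above
    echo "scale=2; $(( 0x480040 * 4096 )) / 1048576" | bc   # -> 18432.25
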
00:31:06.932 [2024-11-06 08:07:29.330116] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:31:10.221 [2024-11-06 08:07:32.419749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.221 [2024-11-06 08:07:32.419838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:31:10.221 [2024-11-06 08:07:32.419862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3089.664 ms 00:31:10.221 [2024-11-06 08:07:32.419879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.221 [2024-11-06 08:07:32.461259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.221 [2024-11-06 08:07:32.461346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:10.221 [2024-11-06 08:07:32.461372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 41.023 ms 00:31:10.221 [2024-11-06 08:07:32.461391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.221 [2024-11-06 08:07:32.461559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.221 [2024-11-06 08:07:32.461621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:31:10.221 [2024-11-06 08:07:32.461638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:31:10.221 [2024-11-06 08:07:32.461657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.221 [2024-11-06 08:07:32.504415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.221 [2024-11-06 08:07:32.504476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:10.221 [2024-11-06 08:07:32.504496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 42.695 ms 00:31:10.221 [2024-11-06 08:07:32.504513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.221 [2024-11-06 08:07:32.504591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.221 [2024-11-06 08:07:32.504631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:10.221 [2024-11-06 08:07:32.504648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:10.221 [2024-11-06 08:07:32.504669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.221 [2024-11-06 08:07:32.505582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.222 [2024-11-06 08:07:32.505621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:10.222 [2024-11-06 08:07:32.505639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.792 ms 00:31:10.222 [2024-11-06 08:07:32.505654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.222 [2024-11-06 08:07:32.505762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.222 [2024-11-06 08:07:32.505784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:10.222 [2024-11-06 08:07:32.505798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:31:10.222 [2024-11-06 08:07:32.505817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.222 [2024-11-06 08:07:32.527110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.222 [2024-11-06 08:07:32.527159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:10.222 [2024-11-06 08:07:32.527177] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.261 ms 00:31:10.222 [2024-11-06 08:07:32.527197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.222 [2024-11-06 08:07:32.552654] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:31:10.222 [2024-11-06 08:07:32.554477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.222 [2024-11-06 08:07:32.554513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:31:10.222 [2024-11-06 08:07:32.554536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.116 ms 00:31:10.222 [2024-11-06 08:07:32.554550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.222 [2024-11-06 08:07:32.581898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.222 [2024-11-06 08:07:32.581941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:31:10.222 [2024-11-06 08:07:32.581964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.304 ms 00:31:10.222 [2024-11-06 08:07:32.581979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.222 [2024-11-06 08:07:32.582164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.222 [2024-11-06 08:07:32.582204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:31:10.222 [2024-11-06 08:07:32.582229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:31:10.222 [2024-11-06 08:07:32.582266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.222 [2024-11-06 08:07:32.607120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.222 [2024-11-06 08:07:32.607166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:31:10.222 [2024-11-06 08:07:32.607190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.778 ms 00:31:10.222 [2024-11-06 08:07:32.607204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.222 [2024-11-06 08:07:32.631548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.222 [2024-11-06 08:07:32.631602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:31:10.222 [2024-11-06 08:07:32.631627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.226 ms 00:31:10.222 [2024-11-06 08:07:32.631640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.222 [2024-11-06 08:07:32.632395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.222 [2024-11-06 08:07:32.632433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:31:10.222 [2024-11-06 08:07:32.632454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.647 ms 00:31:10.222 [2024-11-06 08:07:32.632468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.222 [2024-11-06 08:07:32.710722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.222 [2024-11-06 08:07:32.710763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:31:10.222 [2024-11-06 08:07:32.710788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 78.167 ms 00:31:10.222 [2024-11-06 08:07:32.710802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.222 [2024-11-06 08:07:32.737747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:31:10.222 [2024-11-06 08:07:32.737801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:31:10.222 [2024-11-06 08:07:32.737843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.770 ms 00:31:10.222 [2024-11-06 08:07:32.737858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.222 [2024-11-06 08:07:32.762473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.222 [2024-11-06 08:07:32.762515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:31:10.222 [2024-11-06 08:07:32.762537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.494 ms 00:31:10.222 [2024-11-06 08:07:32.762550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.222 [2024-11-06 08:07:32.787169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.222 [2024-11-06 08:07:32.787211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:31:10.222 [2024-11-06 08:07:32.787234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.566 ms 00:31:10.222 [2024-11-06 08:07:32.787260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.222 [2024-11-06 08:07:32.787325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.222 [2024-11-06 08:07:32.787345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:31:10.222 [2024-11-06 08:07:32.787382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:31:10.222 [2024-11-06 08:07:32.787395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.222 [2024-11-06 08:07:32.787510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:10.222 [2024-11-06 08:07:32.787530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:31:10.222 [2024-11-06 08:07:32.787548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:31:10.222 [2024-11-06 08:07:32.787572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:10.222 [2024-11-06 08:07:32.789117] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3476.972 ms, result 0 00:31:10.222 { 00:31:10.222 "name": "ftl", 00:31:10.222 "uuid": "0a382153-ee62-4328-afb1-a35c2c52f33e" 00:31:10.222 } 00:31:10.222 08:07:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:31:10.482 [2024-11-06 08:07:33.071860] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:10.482 08:07:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:31:11.050 08:07:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:31:11.050 [2024-11-06 08:07:33.640392] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:31:11.050 08:07:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:31:11.309 [2024-11-06 08:07:33.842102] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:11.309 08:07:33 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:31:11.876 Fill FTL, iteration 1 00:31:11.876 08:07:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:31:11.876 08:07:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:31:11.876 08:07:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:31:11.876 08:07:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:31:11.876 08:07:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:31:11.876 08:07:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:31:11.876 08:07:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:31:11.876 08:07:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:31:11.876 08:07:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:31:11.876 08:07:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:31:11.876 08:07:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:31:11.877 08:07:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:31:11.877 08:07:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:11.877 08:07:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:11.877 08:07:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:11.877 08:07:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:31:11.877 08:07:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:31:11.877 08:07:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=81574 00:31:11.877 08:07:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:31:11.877 08:07:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 81574 /var/tmp/spdk.tgt.sock 00:31:11.877 08:07:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 81574 ']' 00:31:11.877 08:07:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:31:11.877 08:07:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:11.877 08:07:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:31:11.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:31:11.877 08:07:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:11.877 08:07:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:11.877 [2024-11-06 08:07:34.354288] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
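Condensed from the ftl/common.sh@121-126 trace above, exporting the freshly created FTL bdev over NVMe/TCP comes down to four RPCs plus a config snapshot. A minimal sketch using the same arguments as this run (the trace elides where the save_config output is redirected, so the filename here is illustrative):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport --trtype TCP
    $RPC nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1   # -a: allow any host
    $RPC nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
    $RPC nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
    $RPC save_config > tgt.json   # illustrative name; see the --config=... flag on the restart below
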
00:31:11.877 [2024-11-06 08:07:34.354429] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81574 ] 00:31:12.136 [2024-11-06 08:07:34.529933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:12.136 [2024-11-06 08:07:34.673208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:13.073 08:07:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:13.073 08:07:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:31:13.073 08:07:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:31:13.334 ftln1 00:31:13.334 08:07:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:31:13.334 08:07:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:31:13.593 08:07:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:31:13.594 08:07:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 81574 00:31:13.594 08:07:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 81574 ']' 00:31:13.594 08:07:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 81574 00:31:13.594 08:07:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:31:13.594 08:07:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:13.594 08:07:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81574 00:31:13.594 killing process with pid 81574 00:31:13.594 08:07:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:13.594 08:07:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:13.594 08:07:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81574' 00:31:13.594 08:07:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 81574 00:31:13.594 08:07:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 81574 00:31:15.498 08:07:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:31:15.498 08:07:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:31:15.498 [2024-11-06 08:07:37.824529] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
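The initiator side, traced at ftl/common.sh@162-177 above, is a short-lived helper: a second spdk_tgt pinned to core 1 with its own RPC socket attaches to the exported subsystem (the namespace surfaces as bdev "ftln1"), its bdev subsystem config is snapshotted to ini.json, and the helper is killed; spdk_dd later replays that JSON to recreate the attachment on its own. A sketch of the same flow:

    BIN=/home/vagrant/spdk_repo/spdk/build/bin
    RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
    "$BIN"/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &
    # (wait for the RPC socket to come up before issuing calls)
    $RPC bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0
    { echo '{"subsystems": ['
      $RPC save_subsystem_config -n bdev
      echo ']}'; } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
    kill $!   # the snapshot, not the process, carries the state forward
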
00:31:15.498 [2024-11-06 08:07:37.824656] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81620 ] 00:31:15.498 [2024-11-06 08:07:37.985711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:15.498 [2024-11-06 08:07:38.083864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:16.877  [2024-11-06T08:07:40.884Z] Copying: 231/1024 [MB] (231 MBps) [2024-11-06T08:07:41.820Z] Copying: 458/1024 [MB] (227 MBps) [2024-11-06T08:07:42.761Z] Copying: 687/1024 [MB] (229 MBps) [2024-11-06T08:07:43.075Z] Copying: 917/1024 [MB] (230 MBps) [2024-11-06T08:07:44.011Z] Copying: 1024/1024 [MB] (average 228 MBps) 00:31:21.382 00:31:21.382 Calculate MD5 checksum, iteration 1 00:31:21.382 08:07:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:31:21.382 08:07:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:31:21.382 08:07:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:21.382 08:07:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:21.382 08:07:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:21.382 08:07:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:21.382 08:07:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:21.382 08:07:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:21.382 [2024-11-06 08:07:43.932088] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
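The fill and the checksum pass above are the same spdk_dd invocation run in opposite directions: the fill streams 1024 x 1 MiB of /dev/urandom into ftln1 at a given --seek, and the verify pass pulls the same extent back out with --ib/--skip into a plain file for md5sum. Side by side, with BIN as in the sketch above and the arguments used in this run:

    INI=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
    # write: 1 GiB of random data into the FTL bdev at the current offset
    "$BIN"/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json="$INI" \
        --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0
    # read the same 1 GiB extent back and fingerprint it
    "$BIN"/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json="$INI" \
        --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
    md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 '-d '
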
00:31:21.382 [2024-11-06 08:07:43.932215] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81680 ] 00:31:21.644 [2024-11-06 08:07:44.097592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:21.644 [2024-11-06 08:07:44.204471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:23.022  [2024-11-06T08:07:47.029Z] Copying: 486/1024 [MB] (486 MBps) [2024-11-06T08:07:47.030Z] Copying: 970/1024 [MB] (484 MBps) [2024-11-06T08:07:47.597Z] Copying: 1024/1024 [MB] (average 485 MBps) 00:31:24.968 00:31:24.968 08:07:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:31:24.968 08:07:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:26.873 08:07:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:31:26.873 Fill FTL, iteration 2 00:31:26.873 08:07:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=52203446d90af94bc34f9abe60630959 00:31:26.873 08:07:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:31:26.873 08:07:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:31:26.873 08:07:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:31:26.873 08:07:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:31:26.873 08:07:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:26.873 08:07:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:26.873 08:07:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:26.873 08:07:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:26.873 08:07:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:31:26.873 [2024-11-06 08:07:49.299732] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
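Around those two invocations, upgrade_shutdown.sh@38-48 (visible piecemeal in the trace) keeps simple bookkeeping: each iteration records the md5 in sums[i] and advances --seek/--skip by 1024 MiB so iteration 2 lands in the next 1 GiB of the device. Roughly, with fill_and_verify standing in as a hypothetical wrapper around the spdk_dd pair sketched above:

    seek=0; skip=0; sums=()
    for (( i = 0; i < 2; i++ )); do
        fill_and_verify "$seek" "$skip"        # hypothetical wrapper: fill, then read back to $FILE
        sums[i]=$(md5sum "$FILE" | cut -f1 '-d ')   # $FILE: hypothetical name for the readback file
        seek=$(( seek + 1024 )); skip=$(( skip + 1024 ))
    done

Presumably the stored sums serve as the reference data for re-verifying the device contents after the shutdown and restart that follow.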
00:31:26.873 [2024-11-06 08:07:49.299913] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81737 ] 00:31:26.873 [2024-11-06 08:07:49.478376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:27.138 [2024-11-06 08:07:49.605151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:28.515  [2024-11-06T08:07:52.079Z] Copying: 231/1024 [MB] (231 MBps) [2024-11-06T08:07:53.456Z] Copying: 466/1024 [MB] (235 MBps) [2024-11-06T08:07:54.393Z] Copying: 692/1024 [MB] (226 MBps) [2024-11-06T08:07:54.652Z] Copying: 920/1024 [MB] (228 MBps) [2024-11-06T08:07:55.590Z] Copying: 1024/1024 [MB] (average 229 MBps) 00:31:32.961 00:31:32.961 Calculate MD5 checksum, iteration 2 00:31:32.961 08:07:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:31:32.961 08:07:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:31:32.961 08:07:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:32.961 08:07:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:32.961 08:07:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:32.961 08:07:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:32.961 08:07:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:32.961 08:07:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:32.961 [2024-11-06 08:07:55.516890] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
00:31:32.961 [2024-11-06 08:07:55.517886] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81801 ] 00:31:33.220 [2024-11-06 08:07:55.704487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:33.220 [2024-11-06 08:07:55.802804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:35.125  [2024-11-06T08:07:58.691Z] Copying: 483/1024 [MB] (483 MBps) [2024-11-06T08:07:58.691Z] Copying: 959/1024 [MB] (476 MBps) [2024-11-06T08:07:59.627Z] Copying: 1024/1024 [MB] (average 478 MBps) 00:31:36.998 00:31:36.998 08:07:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:31:36.998 08:07:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:38.900 08:08:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:31:38.900 08:08:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=24d7e21f142c01579cdd374d0f387b79 00:31:38.900 08:08:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:31:38.900 08:08:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:31:38.900 08:08:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:31:39.159 [2024-11-06 08:08:01.613848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:39.159 [2024-11-06 08:08:01.613951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:39.159 [2024-11-06 08:08:01.613974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:31:39.159 [2024-11-06 08:08:01.613987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:39.159 [2024-11-06 08:08:01.614022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:39.159 [2024-11-06 08:08:01.614041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:39.159 [2024-11-06 08:08:01.614056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:39.159 [2024-11-06 08:08:01.614068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:39.159 [2024-11-06 08:08:01.614106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:39.159 [2024-11-06 08:08:01.614122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:39.159 [2024-11-06 08:08:01.614136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:39.159 [2024-11-06 08:08:01.614149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:39.159 [2024-11-06 08:08:01.614231] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.370 ms, result 0 00:31:39.159 true 00:31:39.159 08:08:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:39.417 { 00:31:39.417 "name": "ftl", 00:31:39.417 "properties": [ 00:31:39.417 { 00:31:39.417 "name": "superblock_version", 00:31:39.417 "value": 5, 00:31:39.417 "read-only": true 00:31:39.417 }, 00:31:39.417 { 00:31:39.417 "name": "base_device", 00:31:39.417 "bands": [ 00:31:39.417 { 00:31:39.417 "id": 
0, 00:31:39.417 "state": "FREE", 00:31:39.417 "validity": 0.0 00:31:39.417 }, 00:31:39.417 { 00:31:39.417 "id": 1, 00:31:39.417 "state": "FREE", 00:31:39.417 "validity": 0.0 00:31:39.417 }, 00:31:39.417 { 00:31:39.417 "id": 2, 00:31:39.417 "state": "FREE", 00:31:39.417 "validity": 0.0 00:31:39.417 }, 00:31:39.417 { 00:31:39.417 "id": 3, 00:31:39.417 "state": "FREE", 00:31:39.417 "validity": 0.0 00:31:39.417 }, 00:31:39.417 { 00:31:39.417 "id": 4, 00:31:39.417 "state": "FREE", 00:31:39.417 "validity": 0.0 00:31:39.417 }, 00:31:39.417 { 00:31:39.417 "id": 5, 00:31:39.417 "state": "FREE", 00:31:39.417 "validity": 0.0 00:31:39.417 }, 00:31:39.417 { 00:31:39.417 "id": 6, 00:31:39.417 "state": "FREE", 00:31:39.417 "validity": 0.0 00:31:39.417 }, 00:31:39.417 { 00:31:39.417 "id": 7, 00:31:39.417 "state": "FREE", 00:31:39.417 "validity": 0.0 00:31:39.417 }, 00:31:39.417 { 00:31:39.417 "id": 8, 00:31:39.417 "state": "FREE", 00:31:39.417 "validity": 0.0 00:31:39.417 }, 00:31:39.417 { 00:31:39.417 "id": 9, 00:31:39.417 "state": "FREE", 00:31:39.417 "validity": 0.0 00:31:39.417 }, 00:31:39.417 { 00:31:39.417 "id": 10, 00:31:39.417 "state": "FREE", 00:31:39.417 "validity": 0.0 00:31:39.417 }, 00:31:39.417 { 00:31:39.417 "id": 11, 00:31:39.417 "state": "FREE", 00:31:39.417 "validity": 0.0 00:31:39.417 }, 00:31:39.417 { 00:31:39.417 "id": 12, 00:31:39.417 "state": "FREE", 00:31:39.417 "validity": 0.0 00:31:39.417 }, 00:31:39.417 { 00:31:39.417 "id": 13, 00:31:39.417 "state": "FREE", 00:31:39.417 "validity": 0.0 00:31:39.417 }, 00:31:39.417 { 00:31:39.417 "id": 14, 00:31:39.417 "state": "FREE", 00:31:39.417 "validity": 0.0 00:31:39.417 }, 00:31:39.417 { 00:31:39.417 "id": 15, 00:31:39.417 "state": "FREE", 00:31:39.417 "validity": 0.0 00:31:39.417 }, 00:31:39.417 { 00:31:39.417 "id": 16, 00:31:39.417 "state": "FREE", 00:31:39.417 "validity": 0.0 00:31:39.417 }, 00:31:39.417 { 00:31:39.417 "id": 17, 00:31:39.417 "state": "FREE", 00:31:39.417 "validity": 0.0 00:31:39.417 } 00:31:39.417 ], 00:31:39.417 "read-only": true 00:31:39.417 }, 00:31:39.417 { 00:31:39.417 "name": "cache_device", 00:31:39.417 "type": "bdev", 00:31:39.417 "chunks": [ 00:31:39.417 { 00:31:39.417 "id": 0, 00:31:39.417 "state": "INACTIVE", 00:31:39.417 "utilization": 0.0 00:31:39.417 }, 00:31:39.417 { 00:31:39.417 "id": 1, 00:31:39.417 "state": "CLOSED", 00:31:39.417 "utilization": 1.0 00:31:39.417 }, 00:31:39.417 { 00:31:39.417 "id": 2, 00:31:39.417 "state": "CLOSED", 00:31:39.417 "utilization": 1.0 00:31:39.417 }, 00:31:39.417 { 00:31:39.417 "id": 3, 00:31:39.417 "state": "OPEN", 00:31:39.417 "utilization": 0.001953125 00:31:39.417 }, 00:31:39.417 { 00:31:39.417 "id": 4, 00:31:39.417 "state": "OPEN", 00:31:39.417 "utilization": 0.0 00:31:39.417 } 00:31:39.417 ], 00:31:39.417 "read-only": true 00:31:39.417 }, 00:31:39.417 { 00:31:39.417 "name": "verbose_mode", 00:31:39.417 "value": true, 00:31:39.417 "unit": "", 00:31:39.417 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:31:39.417 }, 00:31:39.417 { 00:31:39.417 "name": "prep_upgrade_on_shutdown", 00:31:39.417 "value": false, 00:31:39.417 "unit": "", 00:31:39.417 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:31:39.417 } 00:31:39.417 ] 00:31:39.417 } 00:31:39.417 08:08:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:31:39.675 [2024-11-06 08:08:02.052878] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:39.675 [2024-11-06 08:08:02.052923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:39.675 [2024-11-06 08:08:02.052942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:39.675 [2024-11-06 08:08:02.052954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:39.675 [2024-11-06 08:08:02.052986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:39.675 [2024-11-06 08:08:02.053004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:39.675 [2024-11-06 08:08:02.053016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:39.675 [2024-11-06 08:08:02.053028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:39.675 [2024-11-06 08:08:02.053055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:39.675 [2024-11-06 08:08:02.053070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:39.675 [2024-11-06 08:08:02.053081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:39.675 [2024-11-06 08:08:02.053092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:39.675 [2024-11-06 08:08:02.053170] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.276 ms, result 0 00:31:39.675 true 00:31:39.675 08:08:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:31:39.675 08:08:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:31:39.675 08:08:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:39.934 08:08:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:31:39.934 08:08:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:31:39.934 08:08:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:31:39.934 [2024-11-06 08:08:02.537361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:39.934 [2024-11-06 08:08:02.537405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:39.934 [2024-11-06 08:08:02.537422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:39.934 [2024-11-06 08:08:02.537433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:39.934 [2024-11-06 08:08:02.537467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:39.934 [2024-11-06 08:08:02.537485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:39.934 [2024-11-06 08:08:02.537496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:39.934 [2024-11-06 08:08:02.537507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:39.934 [2024-11-06 08:08:02.537535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:39.934 [2024-11-06 08:08:02.537550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:39.934 [2024-11-06 08:08:02.537562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:39.934 [2024-11-06 
08:08:02.537573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:39.934 [2024-11-06 08:08:02.537636] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.259 ms, result 0 00:31:39.934 true 00:31:39.934 08:08:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:40.526 { 00:31:40.526 "name": "ftl", 00:31:40.526 "properties": [ 00:31:40.526 { 00:31:40.526 "name": "superblock_version", 00:31:40.526 "value": 5, 00:31:40.526 "read-only": true 00:31:40.526 }, 00:31:40.526 { 00:31:40.526 "name": "base_device", 00:31:40.526 "bands": [ 00:31:40.526 { 00:31:40.526 "id": 0, 00:31:40.526 "state": "FREE", 00:31:40.526 "validity": 0.0 00:31:40.526 }, 00:31:40.526 { 00:31:40.526 "id": 1, 00:31:40.526 "state": "FREE", 00:31:40.526 "validity": 0.0 00:31:40.526 }, 00:31:40.526 { 00:31:40.526 "id": 2, 00:31:40.526 "state": "FREE", 00:31:40.526 "validity": 0.0 00:31:40.526 }, 00:31:40.526 { 00:31:40.526 "id": 3, 00:31:40.526 "state": "FREE", 00:31:40.526 "validity": 0.0 00:31:40.526 }, 00:31:40.526 { 00:31:40.526 "id": 4, 00:31:40.526 "state": "FREE", 00:31:40.526 "validity": 0.0 00:31:40.526 }, 00:31:40.526 { 00:31:40.526 "id": 5, 00:31:40.526 "state": "FREE", 00:31:40.526 "validity": 0.0 00:31:40.526 }, 00:31:40.526 { 00:31:40.526 "id": 6, 00:31:40.526 "state": "FREE", 00:31:40.526 "validity": 0.0 00:31:40.526 }, 00:31:40.526 { 00:31:40.526 "id": 7, 00:31:40.526 "state": "FREE", 00:31:40.526 "validity": 0.0 00:31:40.526 }, 00:31:40.526 { 00:31:40.526 "id": 8, 00:31:40.526 "state": "FREE", 00:31:40.526 "validity": 0.0 00:31:40.526 }, 00:31:40.526 { 00:31:40.526 "id": 9, 00:31:40.526 "state": "FREE", 00:31:40.526 "validity": 0.0 00:31:40.526 }, 00:31:40.526 { 00:31:40.526 "id": 10, 00:31:40.526 "state": "FREE", 00:31:40.526 "validity": 0.0 00:31:40.526 }, 00:31:40.526 { 00:31:40.526 "id": 11, 00:31:40.526 "state": "FREE", 00:31:40.526 "validity": 0.0 00:31:40.526 }, 00:31:40.526 { 00:31:40.526 "id": 12, 00:31:40.526 "state": "FREE", 00:31:40.526 "validity": 0.0 00:31:40.526 }, 00:31:40.526 { 00:31:40.526 "id": 13, 00:31:40.526 "state": "FREE", 00:31:40.526 "validity": 0.0 00:31:40.526 }, 00:31:40.526 { 00:31:40.526 "id": 14, 00:31:40.526 "state": "FREE", 00:31:40.526 "validity": 0.0 00:31:40.526 }, 00:31:40.526 { 00:31:40.526 "id": 15, 00:31:40.526 "state": "FREE", 00:31:40.526 "validity": 0.0 00:31:40.526 }, 00:31:40.526 { 00:31:40.526 "id": 16, 00:31:40.526 "state": "FREE", 00:31:40.526 "validity": 0.0 00:31:40.526 }, 00:31:40.526 { 00:31:40.526 "id": 17, 00:31:40.526 "state": "FREE", 00:31:40.526 "validity": 0.0 00:31:40.526 } 00:31:40.526 ], 00:31:40.526 "read-only": true 00:31:40.526 }, 00:31:40.526 { 00:31:40.526 "name": "cache_device", 00:31:40.526 "type": "bdev", 00:31:40.526 "chunks": [ 00:31:40.526 { 00:31:40.526 "id": 0, 00:31:40.526 "state": "INACTIVE", 00:31:40.526 "utilization": 0.0 00:31:40.526 }, 00:31:40.526 { 00:31:40.526 "id": 1, 00:31:40.526 "state": "CLOSED", 00:31:40.526 "utilization": 1.0 00:31:40.526 }, 00:31:40.526 { 00:31:40.526 "id": 2, 00:31:40.526 "state": "CLOSED", 00:31:40.526 "utilization": 1.0 00:31:40.526 }, 00:31:40.526 { 00:31:40.526 "id": 3, 00:31:40.526 "state": "OPEN", 00:31:40.526 "utilization": 0.001953125 00:31:40.526 }, 00:31:40.526 { 00:31:40.526 "id": 4, 00:31:40.526 "state": "OPEN", 00:31:40.526 "utilization": 0.0 00:31:40.526 } 00:31:40.526 ], 00:31:40.526 "read-only": true 00:31:40.526 
}, 00:31:40.526 { 00:31:40.526 "name": "verbose_mode", 00:31:40.526 "value": true, 00:31:40.526 "unit": "", 00:31:40.526 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:31:40.526 }, 00:31:40.526 { 00:31:40.526 "name": "prep_upgrade_on_shutdown", 00:31:40.526 "value": true, 00:31:40.526 "unit": "", 00:31:40.526 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:31:40.526 } 00:31:40.526 ] 00:31:40.526 } 00:31:40.526 08:08:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:31:40.526 08:08:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 81446 ]] 00:31:40.526 08:08:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 81446 00:31:40.526 08:08:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 81446 ']' 00:31:40.526 08:08:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 81446 00:31:40.526 08:08:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:31:40.526 08:08:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:40.526 08:08:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81446 00:31:40.526 08:08:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:40.526 08:08:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:40.526 08:08:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81446' 00:31:40.526 killing process with pid 81446 00:31:40.526 08:08:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 81446 00:31:40.526 08:08:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 81446 00:31:41.468 [2024-11-06 08:08:03.772115] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:31:41.468 [2024-11-06 08:08:03.788816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:41.468 [2024-11-06 08:08:03.788867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:31:41.468 [2024-11-06 08:08:03.788892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:41.468 [2024-11-06 08:08:03.788905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:41.468 [2024-11-06 08:08:03.788956] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:31:41.468 [2024-11-06 08:08:03.792398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:41.468 [2024-11-06 08:08:03.792435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:31:41.468 [2024-11-06 08:08:03.792452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.414 ms 00:31:41.468 [2024-11-06 08:08:03.792469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:49.588 [2024-11-06 08:08:10.866187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:49.588 [2024-11-06 08:08:10.866297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:31:49.588 [2024-11-06 08:08:10.866322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7073.717 ms 00:31:49.588 [2024-11-06 08:08:10.866335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:49.588 [2024-11-06 
08:08:10.867323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:49.588 [2024-11-06 08:08:10.867359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:31:49.588 [2024-11-06 08:08:10.867387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.953 ms 00:31:49.588 [2024-11-06 08:08:10.867399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:49.588 [2024-11-06 08:08:10.868354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:49.588 [2024-11-06 08:08:10.868605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:31:49.588 [2024-11-06 08:08:10.868634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.907 ms 00:31:49.588 [2024-11-06 08:08:10.868649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:49.588 [2024-11-06 08:08:10.879556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:49.588 [2024-11-06 08:08:10.879749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:31:49.588 [2024-11-06 08:08:10.879779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.846 ms 00:31:49.588 [2024-11-06 08:08:10.879793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:49.588 [2024-11-06 08:08:10.886750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:49.588 [2024-11-06 08:08:10.886938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:31:49.588 [2024-11-06 08:08:10.886966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.905 ms 00:31:49.588 [2024-11-06 08:08:10.886980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:49.588 [2024-11-06 08:08:10.887086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:49.588 [2024-11-06 08:08:10.887112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:31:49.588 [2024-11-06 08:08:10.887127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 00:31:49.588 [2024-11-06 08:08:10.887149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:49.588 [2024-11-06 08:08:10.896921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:49.588 [2024-11-06 08:08:10.896961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:31:49.588 [2024-11-06 08:08:10.896978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.747 ms 00:31:49.588 [2024-11-06 08:08:10.896989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:49.588 [2024-11-06 08:08:10.906807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:49.588 [2024-11-06 08:08:10.906846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:31:49.588 [2024-11-06 08:08:10.906862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.777 ms 00:31:49.588 [2024-11-06 08:08:10.906873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:49.588 [2024-11-06 08:08:10.916505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:49.588 [2024-11-06 08:08:10.916689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:31:49.588 [2024-11-06 08:08:10.916716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.591 ms 00:31:49.588 [2024-11-06 08:08:10.916728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:31:49.588 [2024-11-06 08:08:10.926465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:49.588 [2024-11-06 08:08:10.926507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:31:49.588 [2024-11-06 08:08:10.926523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.635 ms 00:31:49.588 [2024-11-06 08:08:10.926534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:49.588 [2024-11-06 08:08:10.926575] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:31:49.588 [2024-11-06 08:08:10.926601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:49.588 [2024-11-06 08:08:10.926617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:31:49.588 [2024-11-06 08:08:10.926648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:31:49.588 [2024-11-06 08:08:10.926661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:49.588 [2024-11-06 08:08:10.926674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:49.588 [2024-11-06 08:08:10.926686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:49.588 [2024-11-06 08:08:10.926698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:49.588 [2024-11-06 08:08:10.926710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:49.588 [2024-11-06 08:08:10.926722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:49.588 [2024-11-06 08:08:10.926734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:49.588 [2024-11-06 08:08:10.926746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:49.588 [2024-11-06 08:08:10.926758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:49.588 [2024-11-06 08:08:10.926770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:49.588 [2024-11-06 08:08:10.926783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:49.588 [2024-11-06 08:08:10.926795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:49.588 [2024-11-06 08:08:10.926807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:49.588 [2024-11-06 08:08:10.926818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:49.588 [2024-11-06 08:08:10.926830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:49.588 [2024-11-06 08:08:10.926844] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:31:49.588 [2024-11-06 08:08:10.926857] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 0a382153-ee62-4328-afb1-a35c2c52f33e 00:31:49.588 [2024-11-06 08:08:10.926869] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:31:49.588 [2024-11-06 08:08:10.926881] 
ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:31:49.588 [2024-11-06 08:08:10.926893] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:31:49.588 [2024-11-06 08:08:10.926905] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:31:49.588 [2024-11-06 08:08:10.926927] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:31:49.588 [2024-11-06 08:08:10.926940] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:31:49.588 [2024-11-06 08:08:10.926952] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:31:49.588 [2024-11-06 08:08:10.926962] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:31:49.588 [2024-11-06 08:08:10.926973] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:31:49.588 [2024-11-06 08:08:10.926987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:49.588 [2024-11-06 08:08:10.927006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:31:49.588 [2024-11-06 08:08:10.927031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.413 ms 00:31:49.589 [2024-11-06 08:08:10.927043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:49.589 [2024-11-06 08:08:10.941607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:49.589 [2024-11-06 08:08:10.941646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:31:49.589 [2024-11-06 08:08:10.941664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.539 ms 00:31:49.589 [2024-11-06 08:08:10.941676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:49.589 [2024-11-06 08:08:10.942139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:49.589 [2024-11-06 08:08:10.942159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:31:49.589 [2024-11-06 08:08:10.942174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.410 ms 00:31:49.589 [2024-11-06 08:08:10.942185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:49.589 [2024-11-06 08:08:10.990647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:49.589 [2024-11-06 08:08:10.990698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:49.589 [2024-11-06 08:08:10.990716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:49.589 [2024-11-06 08:08:10.990728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:49.589 [2024-11-06 08:08:10.990778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:49.589 [2024-11-06 08:08:10.990796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:49.589 [2024-11-06 08:08:10.990809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:49.589 [2024-11-06 08:08:10.990822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:49.589 [2024-11-06 08:08:10.990924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:49.589 [2024-11-06 08:08:10.990945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:49.589 [2024-11-06 08:08:10.990960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:49.589 [2024-11-06 08:08:10.990972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:49.589 
[2024-11-06 08:08:10.991006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:49.589 [2024-11-06 08:08:10.991023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:49.589 [2024-11-06 08:08:10.991036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:49.589 [2024-11-06 08:08:10.991048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:49.589 [2024-11-06 08:08:11.080378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:49.589 [2024-11-06 08:08:11.080458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:49.589 [2024-11-06 08:08:11.080478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:49.589 [2024-11-06 08:08:11.080492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:49.589 [2024-11-06 08:08:11.153356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:49.589 [2024-11-06 08:08:11.153665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:49.589 [2024-11-06 08:08:11.153698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:49.589 [2024-11-06 08:08:11.153712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:49.589 [2024-11-06 08:08:11.153873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:49.589 [2024-11-06 08:08:11.153895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:49.589 [2024-11-06 08:08:11.153910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:49.589 [2024-11-06 08:08:11.153923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:49.589 [2024-11-06 08:08:11.153991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:49.589 [2024-11-06 08:08:11.154021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:49.589 [2024-11-06 08:08:11.154035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:49.589 [2024-11-06 08:08:11.154048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:49.589 [2024-11-06 08:08:11.154179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:49.589 [2024-11-06 08:08:11.154201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:49.589 [2024-11-06 08:08:11.154215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:49.589 [2024-11-06 08:08:11.154227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:49.589 [2024-11-06 08:08:11.154315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:49.589 [2024-11-06 08:08:11.154336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:31:49.589 [2024-11-06 08:08:11.154359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:49.589 [2024-11-06 08:08:11.154371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:49.589 [2024-11-06 08:08:11.154436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:49.589 [2024-11-06 08:08:11.154456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:49.589 [2024-11-06 08:08:11.154470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:49.589 [2024-11-06 08:08:11.154481] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:31:49.589 [2024-11-06 08:08:11.154548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:49.589 [2024-11-06 08:08:11.154569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:49.589 [2024-11-06 08:08:11.154590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:49.589 [2024-11-06 08:08:11.154602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:49.589 [2024-11-06 08:08:11.154773] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7365.956 ms, result 0 00:31:51.494 08:08:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:31:51.494 08:08:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:31:51.494 08:08:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:31:51.494 08:08:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:31:51.494 08:08:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:51.494 08:08:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81996 00:31:51.494 08:08:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:51.494 08:08:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:31:51.494 08:08:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81996 00:31:51.494 08:08:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 81996 ']' 00:31:51.494 08:08:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:51.494 08:08:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:51.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:51.494 08:08:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:51.494 08:08:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:51.494 08:08:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:51.752 [2024-11-06 08:08:14.156632] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
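The tcp_target_setup trace above relaunches spdk_tgt pinned to core 0 with the saved tgt.json, then parks in waitforlisten until the RPC socket answers. A rough stand-in for that wait loop, assuming rpc_get_methods as the liveness probe and an illustrative retry interval (the real helper lives in test/common/autotest_common.sh):

rpc_addr=/var/tmp/spdk.sock   # socket path from the trace above
max_retries=100               # as set in the trace above
for ((i = 0; i < max_retries; i++)); do
    # any cheap RPC proves the target is up; rpc_get_methods is built in
    scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null && break
    sleep 0.1                 # interval is illustrative, not the helper's
done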
00:31:51.752 [2024-11-06 08:08:14.157063] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81996 ] 00:31:51.752 [2024-11-06 08:08:14.324233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:52.011 [2024-11-06 08:08:14.438056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:52.947 [2024-11-06 08:08:15.265327] bdev.c:8607:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:52.947 [2024-11-06 08:08:15.265403] bdev.c:8607:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:52.947 [2024-11-06 08:08:15.411321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:52.947 [2024-11-06 08:08:15.411367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:31:52.947 [2024-11-06 08:08:15.411402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:31:52.947 [2024-11-06 08:08:15.411412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:52.947 [2024-11-06 08:08:15.411482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:52.947 [2024-11-06 08:08:15.411502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:52.947 [2024-11-06 08:08:15.411513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:31:52.947 [2024-11-06 08:08:15.411523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:52.947 [2024-11-06 08:08:15.411552] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:31:52.947 [2024-11-06 08:08:15.412512] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:31:52.947 [2024-11-06 08:08:15.412689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:52.947 [2024-11-06 08:08:15.412866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:52.947 [2024-11-06 08:08:15.412918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.141 ms 00:31:52.947 [2024-11-06 08:08:15.413092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:52.947 [2024-11-06 08:08:15.415114] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:31:52.948 [2024-11-06 08:08:15.429233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:52.948 [2024-11-06 08:08:15.429451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:31:52.948 [2024-11-06 08:08:15.429599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.121 ms 00:31:52.948 [2024-11-06 08:08:15.429623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:52.948 [2024-11-06 08:08:15.429710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:52.948 [2024-11-06 08:08:15.429731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:31:52.948 [2024-11-06 08:08:15.429743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:31:52.948 [2024-11-06 08:08:15.429753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:52.948 [2024-11-06 08:08:15.438494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:52.948 [2024-11-06 
08:08:15.438540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:52.948 [2024-11-06 08:08:15.438571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.642 ms 00:31:52.948 [2024-11-06 08:08:15.438581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:52.948 [2024-11-06 08:08:15.438654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:52.948 [2024-11-06 08:08:15.438677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:52.948 [2024-11-06 08:08:15.438689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:31:52.948 [2024-11-06 08:08:15.438698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:52.948 [2024-11-06 08:08:15.438760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:52.948 [2024-11-06 08:08:15.438776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:31:52.948 [2024-11-06 08:08:15.438791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:31:52.948 [2024-11-06 08:08:15.438801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:52.948 [2024-11-06 08:08:15.438837] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:31:52.948 [2024-11-06 08:08:15.443345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:52.948 [2024-11-06 08:08:15.443380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:52.948 [2024-11-06 08:08:15.443410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.516 ms 00:31:52.948 [2024-11-06 08:08:15.443420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:52.948 [2024-11-06 08:08:15.443531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:52.948 [2024-11-06 08:08:15.443545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:31:52.948 [2024-11-06 08:08:15.443555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:52.948 [2024-11-06 08:08:15.443565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:52.948 [2024-11-06 08:08:15.443627] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:31:52.948 [2024-11-06 08:08:15.443656] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:31:52.948 [2024-11-06 08:08:15.443696] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:31:52.948 [2024-11-06 08:08:15.443713] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:31:52.948 [2024-11-06 08:08:15.443807] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:31:52.948 [2024-11-06 08:08:15.443821] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:31:52.948 [2024-11-06 08:08:15.443833] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:31:52.948 [2024-11-06 08:08:15.443846] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:31:52.948 [2024-11-06 08:08:15.443857] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:31:52.948 [2024-11-06 08:08:15.443868] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:31:52.948 [2024-11-06 08:08:15.443883] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:31:52.948 [2024-11-06 08:08:15.443892] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:31:52.948 [2024-11-06 08:08:15.443902] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:31:52.948 [2024-11-06 08:08:15.443913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:52.948 [2024-11-06 08:08:15.443923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:31:52.948 [2024-11-06 08:08:15.443933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.290 ms 00:31:52.948 [2024-11-06 08:08:15.443943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:52.948 [2024-11-06 08:08:15.444028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:52.948 [2024-11-06 08:08:15.444041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:31:52.948 [2024-11-06 08:08:15.444051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.057 ms 00:31:52.948 [2024-11-06 08:08:15.444066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:52.948 [2024-11-06 08:08:15.444164] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:31:52.948 [2024-11-06 08:08:15.444179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:31:52.948 [2024-11-06 08:08:15.444190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:52.948 [2024-11-06 08:08:15.444200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:52.948 [2024-11-06 08:08:15.444210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:31:52.948 [2024-11-06 08:08:15.444219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:31:52.948 [2024-11-06 08:08:15.444228] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:31:52.948 [2024-11-06 08:08:15.444236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:31:52.948 [2024-11-06 08:08:15.444246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:31:52.948 [2024-11-06 08:08:15.444256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:52.948 [2024-11-06 08:08:15.444291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:31:52.948 [2024-11-06 08:08:15.444303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:31:52.948 [2024-11-06 08:08:15.444321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:52.948 [2024-11-06 08:08:15.444330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:31:52.948 [2024-11-06 08:08:15.444340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:31:52.948 [2024-11-06 08:08:15.444351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:52.948 [2024-11-06 08:08:15.444360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:31:52.948 [2024-11-06 08:08:15.444369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:31:52.948 [2024-11-06 08:08:15.444379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:52.948 [2024-11-06 08:08:15.444388] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:31:52.948 [2024-11-06 08:08:15.444397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:31:52.948 [2024-11-06 08:08:15.444405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:52.948 [2024-11-06 08:08:15.444415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:31:52.948 [2024-11-06 08:08:15.444424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:31:52.948 [2024-11-06 08:08:15.444433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:52.948 [2024-11-06 08:08:15.444454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:31:52.948 [2024-11-06 08:08:15.444464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:31:52.948 [2024-11-06 08:08:15.444473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:52.948 [2024-11-06 08:08:15.444482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:31:52.948 [2024-11-06 08:08:15.444491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:31:52.948 [2024-11-06 08:08:15.444499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:52.948 [2024-11-06 08:08:15.444509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:31:52.948 [2024-11-06 08:08:15.444518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:31:52.948 [2024-11-06 08:08:15.444527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:52.948 [2024-11-06 08:08:15.444536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:31:52.948 [2024-11-06 08:08:15.444545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:31:52.948 [2024-11-06 08:08:15.444554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:52.948 [2024-11-06 08:08:15.444563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:31:52.948 [2024-11-06 08:08:15.444572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:31:52.948 [2024-11-06 08:08:15.444581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:52.948 [2024-11-06 08:08:15.444591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:31:52.948 [2024-11-06 08:08:15.444600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:31:52.948 [2024-11-06 08:08:15.444609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:52.948 [2024-11-06 08:08:15.444618] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:31:52.948 [2024-11-06 08:08:15.444628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:31:52.948 [2024-11-06 08:08:15.444638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:52.948 [2024-11-06 08:08:15.444648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:52.948 [2024-11-06 08:08:15.444659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:31:52.948 [2024-11-06 08:08:15.444669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:31:52.948 [2024-11-06 08:08:15.444678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:31:52.948 [2024-11-06 08:08:15.444688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:31:52.948 [2024-11-06 08:08:15.444697] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:31:52.948 [2024-11-06 08:08:15.444706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:31:52.948 [2024-11-06 08:08:15.444716] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:31:52.948 [2024-11-06 08:08:15.444733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:52.948 [2024-11-06 08:08:15.444745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:31:52.949 [2024-11-06 08:08:15.444755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:31:52.949 [2024-11-06 08:08:15.444765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:31:52.949 [2024-11-06 08:08:15.444775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:31:52.949 [2024-11-06 08:08:15.444785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:31:52.949 [2024-11-06 08:08:15.444795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:31:52.949 [2024-11-06 08:08:15.444804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:31:52.949 [2024-11-06 08:08:15.444814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:31:52.949 [2024-11-06 08:08:15.444824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:31:52.949 [2024-11-06 08:08:15.444834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:31:52.949 [2024-11-06 08:08:15.444843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:31:52.949 [2024-11-06 08:08:15.444853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:31:52.949 [2024-11-06 08:08:15.444862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:31:52.949 [2024-11-06 08:08:15.444872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:31:52.949 [2024-11-06 08:08:15.444882] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:31:52.949 [2024-11-06 08:08:15.444893] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:52.949 [2024-11-06 08:08:15.444904] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:52.949 [2024-11-06 08:08:15.444915] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:31:52.949 [2024-11-06 08:08:15.444925] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:31:52.949 [2024-11-06 08:08:15.444935] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:31:52.949 [2024-11-06 08:08:15.444946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:52.949 [2024-11-06 08:08:15.444956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:31:52.949 [2024-11-06 08:08:15.444967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.838 ms 00:31:52.949 [2024-11-06 08:08:15.444977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:52.949 [2024-11-06 08:08:15.445036] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:31:52.949 [2024-11-06 08:08:15.445053] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:31:56.238 [2024-11-06 08:08:18.385292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:56.238 [2024-11-06 08:08:18.385524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:31:56.238 [2024-11-06 08:08:18.385648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2940.268 ms 00:31:56.238 [2024-11-06 08:08:18.385766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:56.238 [2024-11-06 08:08:18.419175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:56.238 [2024-11-06 08:08:18.419407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:56.238 [2024-11-06 08:08:18.419531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.080 ms 00:31:56.238 [2024-11-06 08:08:18.419652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:56.238 [2024-11-06 08:08:18.419838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:56.238 [2024-11-06 08:08:18.419892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:31:56.238 [2024-11-06 08:08:18.420009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:31:56.238 [2024-11-06 08:08:18.420056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:56.238 [2024-11-06 08:08:18.457887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:56.238 [2024-11-06 08:08:18.458068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:56.238 [2024-11-06 08:08:18.458185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.646 ms 00:31:56.238 [2024-11-06 08:08:18.458378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:56.238 [2024-11-06 08:08:18.458468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:56.238 [2024-11-06 08:08:18.458584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:56.238 [2024-11-06 08:08:18.458633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:56.238 [2024-11-06 08:08:18.458745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:56.238 [2024-11-06 08:08:18.459523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:56.238 [2024-11-06 08:08:18.459699] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:56.238 [2024-11-06 08:08:18.459803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.588 ms 00:31:56.238 [2024-11-06 08:08:18.459901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:56.238 [2024-11-06 08:08:18.460008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:56.238 [2024-11-06 08:08:18.460206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:56.238 [2024-11-06 08:08:18.460287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:31:56.238 [2024-11-06 08:08:18.460455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:56.238 [2024-11-06 08:08:18.478854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:56.238 [2024-11-06 08:08:18.479043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:56.238 [2024-11-06 08:08:18.479154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.313 ms 00:31:56.238 [2024-11-06 08:08:18.479274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:56.238 [2024-11-06 08:08:18.513584] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:31:56.238 [2024-11-06 08:08:18.513809] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:31:56.238 [2024-11-06 08:08:18.513949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:56.238 [2024-11-06 08:08:18.514058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:31:56.238 [2024-11-06 08:08:18.514106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.497 ms 00:31:56.238 [2024-11-06 08:08:18.514197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:56.238 [2024-11-06 08:08:18.528686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:56.238 [2024-11-06 08:08:18.528859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:31:56.238 [2024-11-06 08:08:18.529011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.302 ms 00:31:56.238 [2024-11-06 08:08:18.529059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:56.238 [2024-11-06 08:08:18.541702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:56.238 [2024-11-06 08:08:18.541873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:31:56.238 [2024-11-06 08:08:18.542026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.484 ms 00:31:56.238 [2024-11-06 08:08:18.542072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:56.238 [2024-11-06 08:08:18.554454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:56.238 [2024-11-06 08:08:18.554624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:31:56.238 [2024-11-06 08:08:18.554650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.230 ms 00:31:56.238 [2024-11-06 08:08:18.554661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:56.238 [2024-11-06 08:08:18.555521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:56.238 [2024-11-06 08:08:18.555562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:31:56.238 [2024-11-06 
08:08:18.555577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.690 ms 00:31:56.238 [2024-11-06 08:08:18.555587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:56.238 [2024-11-06 08:08:18.619494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:56.238 [2024-11-06 08:08:18.619571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:31:56.238 [2024-11-06 08:08:18.619590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 63.874 ms 00:31:56.238 [2024-11-06 08:08:18.619600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:56.238 [2024-11-06 08:08:18.629526] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:31:56.238 [2024-11-06 08:08:18.630374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:56.238 [2024-11-06 08:08:18.630412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:31:56.238 [2024-11-06 08:08:18.630427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.691 ms 00:31:56.238 [2024-11-06 08:08:18.630437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:56.238 [2024-11-06 08:08:18.630532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:56.238 [2024-11-06 08:08:18.630550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:31:56.238 [2024-11-06 08:08:18.630566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:31:56.238 [2024-11-06 08:08:18.630575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:56.238 [2024-11-06 08:08:18.630664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:56.238 [2024-11-06 08:08:18.630683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:31:56.238 [2024-11-06 08:08:18.630694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:31:56.238 [2024-11-06 08:08:18.630704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:56.238 [2024-11-06 08:08:18.630736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:56.238 [2024-11-06 08:08:18.630750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:31:56.238 [2024-11-06 08:08:18.630761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:31:56.238 [2024-11-06 08:08:18.630776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:56.238 [2024-11-06 08:08:18.630812] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:31:56.238 [2024-11-06 08:08:18.630827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:56.238 [2024-11-06 08:08:18.630836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:31:56.238 [2024-11-06 08:08:18.630846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:31:56.238 [2024-11-06 08:08:18.630856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:56.238 [2024-11-06 08:08:18.655265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:56.238 [2024-11-06 08:08:18.655307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:31:56.238 [2024-11-06 08:08:18.655330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.382 ms 00:31:56.238 [2024-11-06 08:08:18.655341] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:56.238 [2024-11-06 08:08:18.655424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:56.238 [2024-11-06 08:08:18.655441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:31:56.238 [2024-11-06 08:08:18.655452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:31:56.238 [2024-11-06 08:08:18.655462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:56.238 [2024-11-06 08:08:18.657042] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3245.188 ms, result 0 00:31:56.238 [2024-11-06 08:08:18.671643] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:56.238 [2024-11-06 08:08:18.687660] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:31:56.238 [2024-11-06 08:08:18.695807] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:56.238 08:08:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:56.238 08:08:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:31:56.238 08:08:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:56.238 08:08:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:31:56.238 08:08:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:31:56.497 [2024-11-06 08:08:19.039957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:56.497 [2024-11-06 08:08:19.040162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:56.497 [2024-11-06 08:08:19.040191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:31:56.497 [2024-11-06 08:08:19.040203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:56.497 [2024-11-06 08:08:19.040249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:56.497 [2024-11-06 08:08:19.040306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:56.497 [2024-11-06 08:08:19.040331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:56.497 [2024-11-06 08:08:19.040343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:56.497 [2024-11-06 08:08:19.040372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:56.497 [2024-11-06 08:08:19.040386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:56.497 [2024-11-06 08:08:19.040397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:56.497 [2024-11-06 08:08:19.040407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:56.497 [2024-11-06 08:08:19.040477] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.506 ms, result 0 00:31:56.497 true 00:31:56.497 08:08:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:56.757 { 00:31:56.757 "name": "ftl", 00:31:56.757 "properties": [ 00:31:56.757 { 00:31:56.757 "name": "superblock_version", 00:31:56.757 "value": 5, 00:31:56.757 "read-only": true 00:31:56.757 }, 
00:31:56.757 { 00:31:56.757 "name": "base_device", 00:31:56.757 "bands": [ 00:31:56.757 { 00:31:56.757 "id": 0, 00:31:56.757 "state": "CLOSED", 00:31:56.757 "validity": 1.0 00:31:56.757 }, 00:31:56.757 { 00:31:56.757 "id": 1, 00:31:56.757 "state": "CLOSED", 00:31:56.757 "validity": 1.0 00:31:56.757 }, 00:31:56.757 { 00:31:56.757 "id": 2, 00:31:56.757 "state": "CLOSED", 00:31:56.757 "validity": 0.007843137254901933 00:31:56.757 }, 00:31:56.757 { 00:31:56.757 "id": 3, 00:31:56.757 "state": "FREE", 00:31:56.757 "validity": 0.0 00:31:56.757 }, 00:31:56.757 { 00:31:56.757 "id": 4, 00:31:56.757 "state": "FREE", 00:31:56.757 "validity": 0.0 00:31:56.757 }, 00:31:56.757 { 00:31:56.757 "id": 5, 00:31:56.757 "state": "FREE", 00:31:56.757 "validity": 0.0 00:31:56.757 }, 00:31:56.757 { 00:31:56.757 "id": 6, 00:31:56.757 "state": "FREE", 00:31:56.757 "validity": 0.0 00:31:56.757 }, 00:31:56.757 { 00:31:56.757 "id": 7, 00:31:56.757 "state": "FREE", 00:31:56.757 "validity": 0.0 00:31:56.757 }, 00:31:56.757 { 00:31:56.757 "id": 8, 00:31:56.757 "state": "FREE", 00:31:56.757 "validity": 0.0 00:31:56.757 }, 00:31:56.757 { 00:31:56.757 "id": 9, 00:31:56.757 "state": "FREE", 00:31:56.757 "validity": 0.0 00:31:56.757 }, 00:31:56.757 { 00:31:56.757 "id": 10, 00:31:56.757 "state": "FREE", 00:31:56.757 "validity": 0.0 00:31:56.757 }, 00:31:56.757 { 00:31:56.757 "id": 11, 00:31:56.757 "state": "FREE", 00:31:56.757 "validity": 0.0 00:31:56.757 }, 00:31:56.757 { 00:31:56.757 "id": 12, 00:31:56.757 "state": "FREE", 00:31:56.757 "validity": 0.0 00:31:56.757 }, 00:31:56.757 { 00:31:56.757 "id": 13, 00:31:56.757 "state": "FREE", 00:31:56.757 "validity": 0.0 00:31:56.757 }, 00:31:56.757 { 00:31:56.757 "id": 14, 00:31:56.757 "state": "FREE", 00:31:56.757 "validity": 0.0 00:31:56.757 }, 00:31:56.757 { 00:31:56.757 "id": 15, 00:31:56.757 "state": "FREE", 00:31:56.757 "validity": 0.0 00:31:56.757 }, 00:31:56.757 { 00:31:56.757 "id": 16, 00:31:56.757 "state": "FREE", 00:31:56.757 "validity": 0.0 00:31:56.757 }, 00:31:56.757 { 00:31:56.757 "id": 17, 00:31:56.757 "state": "FREE", 00:31:56.757 "validity": 0.0 00:31:56.757 } 00:31:56.757 ], 00:31:56.757 "read-only": true 00:31:56.757 }, 00:31:56.757 { 00:31:56.757 "name": "cache_device", 00:31:56.757 "type": "bdev", 00:31:56.757 "chunks": [ 00:31:56.757 { 00:31:56.757 "id": 0, 00:31:56.757 "state": "INACTIVE", 00:31:56.757 "utilization": 0.0 00:31:56.757 }, 00:31:56.757 { 00:31:56.757 "id": 1, 00:31:56.757 "state": "OPEN", 00:31:56.757 "utilization": 0.0 00:31:56.757 }, 00:31:56.757 { 00:31:56.757 "id": 2, 00:31:56.757 "state": "OPEN", 00:31:56.757 "utilization": 0.0 00:31:56.757 }, 00:31:56.757 { 00:31:56.757 "id": 3, 00:31:56.757 "state": "FREE", 00:31:56.757 "utilization": 0.0 00:31:56.757 }, 00:31:56.757 { 00:31:56.757 "id": 4, 00:31:56.757 "state": "FREE", 00:31:56.757 "utilization": 0.0 00:31:56.757 } 00:31:56.757 ], 00:31:56.757 "read-only": true 00:31:56.757 }, 00:31:56.757 { 00:31:56.757 "name": "verbose_mode", 00:31:56.757 "value": true, 00:31:56.757 "unit": "", 00:31:56.757 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:31:56.757 }, 00:31:56.757 { 00:31:56.757 "name": "prep_upgrade_on_shutdown", 00:31:56.757 "value": false, 00:31:56.757 "unit": "", 00:31:56.757 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:31:56.757 } 00:31:56.757 ] 00:31:56.757 } 00:31:56.758 08:08:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == 
"cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:31:56.758 08:08:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:31:56.758 08:08:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:57.016 08:08:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:31:57.016 08:08:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:31:57.016 08:08:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:31:57.016 08:08:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:57.016 08:08:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:31:57.275 Validate MD5 checksum, iteration 1 00:31:57.275 08:08:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:31:57.275 08:08:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:31:57.275 08:08:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:31:57.275 08:08:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:31:57.275 08:08:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:31:57.275 08:08:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:57.275 08:08:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:31:57.275 08:08:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:57.275 08:08:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:57.275 08:08:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:57.275 08:08:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:57.275 08:08:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:57.275 08:08:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:57.534 [2024-11-06 08:08:19.927347] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
00:31:57.534 [2024-11-06 08:08:19.927781] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82070 ] 00:31:57.534 [2024-11-06 08:08:20.112796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:57.793 [2024-11-06 08:08:20.245062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:59.698  [2024-11-06T08:08:22.895Z] Copying: 465/1024 [MB] (465 MBps) [2024-11-06T08:08:23.154Z] Copying: 921/1024 [MB] (456 MBps) [2024-11-06T08:08:24.526Z] Copying: 1024/1024 [MB] (average 457 MBps) 00:32:01.897 00:32:01.897 08:08:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:32:01.897 08:08:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:03.799 08:08:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:32:03.799 Validate MD5 checksum, iteration 2 00:32:03.799 08:08:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=52203446d90af94bc34f9abe60630959 00:32:03.799 08:08:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 52203446d90af94bc34f9abe60630959 != \5\2\2\0\3\4\4\6\d\9\0\a\f\9\4\b\c\3\4\f\9\a\b\e\6\0\6\3\0\9\5\9 ]] 00:32:03.799 08:08:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:32:03.799 08:08:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:03.799 08:08:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:32:03.799 08:08:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:03.799 08:08:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:03.799 08:08:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:03.799 08:08:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:03.799 08:08:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:03.799 08:08:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:03.799 [2024-11-06 08:08:26.246801] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
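Each checksum pass above has the same shape: read one 1024 MiB window of ftln1 over NVMe/TCP (bs=1048576, count=1024, queue depth 2), hash the output file, and compare the digest against the expected value for that window, advancing the offset by a full window between passes. A condensed sketch of the loop, where tcp_dd is the ftl/common.sh helper traced above and $testdir stands in for the test directory:

iterations=2   # two windows in this run, per the traces above
skip=0
for ((i = 0; i < iterations; i++)); do
    echo "Validate MD5 checksum, iteration $((i + 1))"
    tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip="$skip"
    md5sum "$testdir/file" | cut -f1 -d' '   # 52203446... then 24d7e21f... in this run
    skip=$((skip + 1024))                    # next window starts where this one ended
done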
00:32:03.799 [2024-11-06 08:08:26.247159] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82137 ] 00:32:03.799 [2024-11-06 08:08:26.424576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:04.058 [2024-11-06 08:08:26.580534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:05.962  [2024-11-06T08:08:29.548Z] Copying: 469/1024 [MB] (469 MBps) [2024-11-06T08:08:29.548Z] Copying: 921/1024 [MB] (452 MBps) [2024-11-06T08:08:30.922Z] Copying: 1024/1024 [MB] (average 457 MBps) 00:32:08.293 00:32:08.293 08:08:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:32:08.293 08:08:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:10.195 08:08:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:32:10.195 08:08:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=24d7e21f142c01579cdd374d0f387b79 00:32:10.195 08:08:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 24d7e21f142c01579cdd374d0f387b79 != \2\4\d\7\e\2\1\f\1\4\2\c\0\1\5\7\9\c\d\d\3\7\4\d\0\f\3\8\7\b\7\9 ]] 00:32:10.195 08:08:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:32:10.195 08:08:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:10.195 08:08:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:32:10.195 08:08:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 81996 ]] 00:32:10.195 08:08:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 81996 00:32:10.195 08:08:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:32:10.195 08:08:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:32:10.195 08:08:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:32:10.195 08:08:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:32:10.195 08:08:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:10.195 08:08:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=82200 00:32:10.195 08:08:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:32:10.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:10.195 08:08:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 82200 00:32:10.195 08:08:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 82200 ']' 00:32:10.195 08:08:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:10.195 08:08:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:10.195 08:08:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
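The kill -9 above is the point of the exercise: pid 81996 gets no chance to persist clean-shutdown state, so the next spdk_tgt instance has to bring FTL up from a dirty superblock (bash reports the old target as "Killed" a little further down, once the restart is underway). The sequence in outline, with paths relative to the spdk repo as in the traces:

kill -9 "$spdk_tgt_pid"        # 81996 here; FTL never sees a shutdown
unset spdk_tgt_pid
build/bin/spdk_tgt '--cpumask=[0]' --config=test/ftl/config/tgt.json &
spdk_tgt_pid=$!                # 82200 in this run
waitforlisten "$spdk_tgt_pid"  # block until the new target serves RPCs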
00:32:10.195 08:08:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:10.195 08:08:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:10.195 08:08:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:10.195 [2024-11-06 08:08:32.522336] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:32:10.195 [2024-11-06 08:08:32.523323] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82200 ] 00:32:10.195 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 830: 81996 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:32:10.195 [2024-11-06 08:08:32.691636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:10.195 [2024-11-06 08:08:32.803214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:11.131 [2024-11-06 08:08:33.718269] bdev.c:8607:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:32:11.131 [2024-11-06 08:08:33.718364] bdev.c:8607:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:32:11.391 [2024-11-06 08:08:33.866433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:11.391 [2024-11-06 08:08:33.866479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:32:11.391 [2024-11-06 08:08:33.866502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:32:11.391 [2024-11-06 08:08:33.866514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:11.391 [2024-11-06 08:08:33.866595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:11.391 [2024-11-06 08:08:33.866618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:11.391 [2024-11-06 08:08:33.866631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:32:11.391 [2024-11-06 08:08:33.866644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:11.391 [2024-11-06 08:08:33.866679] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:32:11.391 [2024-11-06 08:08:33.867446] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:32:11.391 [2024-11-06 08:08:33.867487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:11.391 [2024-11-06 08:08:33.867501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:11.391 [2024-11-06 08:08:33.867514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.815 ms 00:32:11.391 [2024-11-06 08:08:33.867527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:11.391 [2024-11-06 08:08:33.868062] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:32:11.391 [2024-11-06 08:08:33.887322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:11.391 [2024-11-06 08:08:33.887366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:32:11.391 [2024-11-06 08:08:33.887385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.261 ms 
00:32:11.391 [2024-11-06 08:08:33.887397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:11.391 [2024-11-06 08:08:33.896687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:11.391 [2024-11-06 08:08:33.896729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:32:11.391 [2024-11-06 08:08:33.896752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:32:11.391 [2024-11-06 08:08:33.896764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:11.391 [2024-11-06 08:08:33.897310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:11.391 [2024-11-06 08:08:33.897343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:11.391 [2024-11-06 08:08:33.897360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.441 ms 00:32:11.391 [2024-11-06 08:08:33.897373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:11.391 [2024-11-06 08:08:33.897466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:11.391 [2024-11-06 08:08:33.897491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:11.391 [2024-11-06 08:08:33.897504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.065 ms 00:32:11.391 [2024-11-06 08:08:33.897515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:11.391 [2024-11-06 08:08:33.897557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:11.391 [2024-11-06 08:08:33.897574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:32:11.391 [2024-11-06 08:08:33.897587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:32:11.391 [2024-11-06 08:08:33.897598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:11.391 [2024-11-06 08:08:33.897631] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:32:11.391 [2024-11-06 08:08:33.900732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:11.391 [2024-11-06 08:08:33.900772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:11.391 [2024-11-06 08:08:33.900788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.108 ms 00:32:11.391 [2024-11-06 08:08:33.900801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:11.391 [2024-11-06 08:08:33.900834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:11.391 [2024-11-06 08:08:33.900858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:32:11.391 [2024-11-06 08:08:33.900871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:11.391 [2024-11-06 08:08:33.900883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:11.391 [2024-11-06 08:08:33.900933] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:32:11.391 [2024-11-06 08:08:33.900966] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:32:11.391 [2024-11-06 08:08:33.901006] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:32:11.391 [2024-11-06 08:08:33.901027] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:32:11.391 [2024-11-06 
08:08:33.901156] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:32:11.391 [2024-11-06 08:08:33.901179] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:32:11.391 [2024-11-06 08:08:33.901196] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:32:11.391 [2024-11-06 08:08:33.901211] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:32:11.391 [2024-11-06 08:08:33.901225] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:32:11.391 [2024-11-06 08:08:33.901237] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:32:11.391 [2024-11-06 08:08:33.901264] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:32:11.391 [2024-11-06 08:08:33.901280] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:32:11.391 [2024-11-06 08:08:33.901293] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:32:11.391 [2024-11-06 08:08:33.901305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:11.391 [2024-11-06 08:08:33.901325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:32:11.391 [2024-11-06 08:08:33.901337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.376 ms 00:32:11.391 [2024-11-06 08:08:33.901350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:11.391 [2024-11-06 08:08:33.901453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:11.391 [2024-11-06 08:08:33.901471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:32:11.391 [2024-11-06 08:08:33.901483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.070 ms 00:32:11.391 [2024-11-06 08:08:33.901494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:11.391 [2024-11-06 08:08:33.901589] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:32:11.391 [2024-11-06 08:08:33.901608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:32:11.391 [2024-11-06 08:08:33.901621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:11.391 [2024-11-06 08:08:33.901640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:11.391 [2024-11-06 08:08:33.901652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:32:11.391 [2024-11-06 08:08:33.901663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:32:11.391 [2024-11-06 08:08:33.901674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:32:11.391 [2024-11-06 08:08:33.901685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:32:11.391 [2024-11-06 08:08:33.901697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:32:11.391 [2024-11-06 08:08:33.901708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:11.391 [2024-11-06 08:08:33.901719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:32:11.391 [2024-11-06 08:08:33.901729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:32:11.391 [2024-11-06 08:08:33.901740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:11.391 [2024-11-06 
08:08:33.901751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:32:11.391 [2024-11-06 08:08:33.901762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:32:11.391 [2024-11-06 08:08:33.901773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:11.391 [2024-11-06 08:08:33.901783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:32:11.391 [2024-11-06 08:08:33.901794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:32:11.391 [2024-11-06 08:08:33.901805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:11.391 [2024-11-06 08:08:33.901815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:32:11.391 [2024-11-06 08:08:33.901826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:32:11.391 [2024-11-06 08:08:33.901837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:11.391 [2024-11-06 08:08:33.901847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:32:11.391 [2024-11-06 08:08:33.901872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:32:11.391 [2024-11-06 08:08:33.901883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:11.391 [2024-11-06 08:08:33.901893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:32:11.392 [2024-11-06 08:08:33.901905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:32:11.392 [2024-11-06 08:08:33.901915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:11.392 [2024-11-06 08:08:33.901928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:32:11.392 [2024-11-06 08:08:33.901938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:32:11.392 [2024-11-06 08:08:33.901949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:11.392 [2024-11-06 08:08:33.901959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:32:11.392 [2024-11-06 08:08:33.901970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:32:11.392 [2024-11-06 08:08:33.901980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:11.392 [2024-11-06 08:08:33.901991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:32:11.392 [2024-11-06 08:08:33.902001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:32:11.392 [2024-11-06 08:08:33.902011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:11.392 [2024-11-06 08:08:33.902024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:32:11.392 [2024-11-06 08:08:33.902035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:32:11.392 [2024-11-06 08:08:33.902045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:11.392 [2024-11-06 08:08:33.902056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:32:11.392 [2024-11-06 08:08:33.902066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:32:11.392 [2024-11-06 08:08:33.902077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:11.392 [2024-11-06 08:08:33.902088] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:32:11.392 [2024-11-06 08:08:33.902101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:32:11.392 
[2024-11-06 08:08:33.902113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:11.392 [2024-11-06 08:08:33.902124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:11.392 [2024-11-06 08:08:33.902135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:32:11.392 [2024-11-06 08:08:33.902146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:32:11.392 [2024-11-06 08:08:33.902157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:32:11.392 [2024-11-06 08:08:33.902168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:32:11.392 [2024-11-06 08:08:33.902178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:32:11.392 [2024-11-06 08:08:33.902189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:32:11.392 [2024-11-06 08:08:33.902201] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:32:11.392 [2024-11-06 08:08:33.902215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:11.392 [2024-11-06 08:08:33.902229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:32:11.392 [2024-11-06 08:08:33.902240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:32:11.392 [2024-11-06 08:08:33.902252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:32:11.392 [2024-11-06 08:08:33.902292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:32:11.392 [2024-11-06 08:08:33.902325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:32:11.392 [2024-11-06 08:08:33.902363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:32:11.392 [2024-11-06 08:08:33.902377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:32:11.392 [2024-11-06 08:08:33.902390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:32:11.392 [2024-11-06 08:08:33.902403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:32:11.392 [2024-11-06 08:08:33.902416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:32:11.392 [2024-11-06 08:08:33.902437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:32:11.392 [2024-11-06 08:08:33.902449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:32:11.392 [2024-11-06 08:08:33.902461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:32:11.392 [2024-11-06 08:08:33.902473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:32:11.392 [2024-11-06 08:08:33.902486] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:32:11.392 [2024-11-06 08:08:33.902499] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:11.392 [2024-11-06 08:08:33.902512] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:11.392 [2024-11-06 08:08:33.902524] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:32:11.392 [2024-11-06 08:08:33.902536] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:32:11.392 [2024-11-06 08:08:33.902547] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:32:11.392 [2024-11-06 08:08:33.902561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:11.392 [2024-11-06 08:08:33.902581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:32:11.392 [2024-11-06 08:08:33.902594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.025 ms 00:32:11.392 [2024-11-06 08:08:33.902606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:11.392 [2024-11-06 08:08:33.939359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:11.392 [2024-11-06 08:08:33.939424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:11.392 [2024-11-06 08:08:33.939444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.668 ms 00:32:11.392 [2024-11-06 08:08:33.939457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:11.392 [2024-11-06 08:08:33.939530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:11.392 [2024-11-06 08:08:33.939548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:32:11.392 [2024-11-06 08:08:33.939562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:32:11.392 [2024-11-06 08:08:33.939573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:11.392 [2024-11-06 08:08:33.981311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:11.392 [2024-11-06 08:08:33.981594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:11.392 [2024-11-06 08:08:33.981624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 41.647 ms 00:32:11.392 [2024-11-06 08:08:33.981639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:11.392 [2024-11-06 08:08:33.981708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:11.392 [2024-11-06 08:08:33.981729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:11.392 [2024-11-06 08:08:33.981744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:11.392 [2024-11-06 08:08:33.981755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:11.392 [2024-11-06 08:08:33.981948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:11.392 [2024-11-06 08:08:33.981968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 
00:32:11.392 [2024-11-06 08:08:33.981983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.072 ms 00:32:11.392 [2024-11-06 08:08:33.981996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:11.392 [2024-11-06 08:08:33.982061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:11.392 [2024-11-06 08:08:33.982079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:11.392 [2024-11-06 08:08:33.982093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:32:11.392 [2024-11-06 08:08:33.982105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:11.392 [2024-11-06 08:08:34.003647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:11.392 [2024-11-06 08:08:34.003691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:11.392 [2024-11-06 08:08:34.003710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.507 ms 00:32:11.392 [2024-11-06 08:08:34.003723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:11.392 [2024-11-06 08:08:34.003879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:11.392 [2024-11-06 08:08:34.003904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:32:11.392 [2024-11-06 08:08:34.003919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:32:11.392 [2024-11-06 08:08:34.003932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:11.652 [2024-11-06 08:08:34.038029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:11.652 [2024-11-06 08:08:34.038075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:32:11.652 [2024-11-06 08:08:34.038095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.063 ms 00:32:11.652 [2024-11-06 08:08:34.038108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:11.652 [2024-11-06 08:08:34.047910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:11.652 [2024-11-06 08:08:34.048068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:32:11.652 [2024-11-06 08:08:34.048097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.536 ms 00:32:11.652 [2024-11-06 08:08:34.048124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:11.652 [2024-11-06 08:08:34.118344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:11.652 [2024-11-06 08:08:34.118453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:32:11.652 [2024-11-06 08:08:34.118487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 70.141 ms 00:32:11.652 [2024-11-06 08:08:34.118502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:11.652 [2024-11-06 08:08:34.118767] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:32:11.652 [2024-11-06 08:08:34.118943] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:32:11.652 [2024-11-06 08:08:34.119098] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:32:11.652 [2024-11-06 08:08:34.119265] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:32:11.652 [2024-11-06 08:08:34.119292] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:11.652 [2024-11-06 08:08:34.119306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:32:11.652 [2024-11-06 08:08:34.119320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.718 ms 00:32:11.652 [2024-11-06 08:08:34.119333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:11.652 [2024-11-06 08:08:34.119456] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:32:11.652 [2024-11-06 08:08:34.119481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:11.652 [2024-11-06 08:08:34.119494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:32:11.652 [2024-11-06 08:08:34.119515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:32:11.652 [2024-11-06 08:08:34.119527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:11.652 [2024-11-06 08:08:34.135397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:11.652 [2024-11-06 08:08:34.135443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:32:11.652 [2024-11-06 08:08:34.135469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.827 ms 00:32:11.652 [2024-11-06 08:08:34.135482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:11.652 [2024-11-06 08:08:34.144735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:11.652 [2024-11-06 08:08:34.144778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:32:11.652 [2024-11-06 08:08:34.144795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:32:11.652 [2024-11-06 08:08:34.144807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:11.652 [2024-11-06 08:08:34.144936] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:32:11.652 [2024-11-06 08:08:34.145329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:11.652 [2024-11-06 08:08:34.145369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:32:11.652 [2024-11-06 08:08:34.145387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.386 ms 00:32:11.652 [2024-11-06 08:08:34.145401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:12.219 [2024-11-06 08:08:34.673127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:12.219 [2024-11-06 08:08:34.673425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:32:12.219 [2024-11-06 08:08:34.673459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 526.801 ms 00:32:12.219 [2024-11-06 08:08:34.673474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:12.219 [2024-11-06 08:08:34.677625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:12.219 [2024-11-06 08:08:34.677692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:32:12.219 [2024-11-06 08:08:34.677711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.013 ms 00:32:12.219 [2024-11-06 08:08:34.677724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:12.219 [2024-11-06 08:08:34.678284] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered 
chunk, offset = 262144, seq id 14 00:32:12.219 [2024-11-06 08:08:34.678322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:12.219 [2024-11-06 08:08:34.678336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:32:12.219 [2024-11-06 08:08:34.678349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.547 ms 00:32:12.220 [2024-11-06 08:08:34.678360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:12.220 [2024-11-06 08:08:34.678467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:12.220 [2024-11-06 08:08:34.678493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:32:12.220 [2024-11-06 08:08:34.678507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:12.220 [2024-11-06 08:08:34.678519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:12.220 [2024-11-06 08:08:34.678576] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 533.653 ms, result 0 00:32:12.220 [2024-11-06 08:08:34.678629] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:32:12.220 [2024-11-06 08:08:34.678906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:12.220 [2024-11-06 08:08:34.678930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:32:12.220 [2024-11-06 08:08:34.678943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.278 ms 00:32:12.220 [2024-11-06 08:08:34.678953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:12.788 [2024-11-06 08:08:35.211643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:12.788 [2024-11-06 08:08:35.211690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:32:12.788 [2024-11-06 08:08:35.211720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 531.810 ms 00:32:12.788 [2024-11-06 08:08:35.211731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:12.788 [2024-11-06 08:08:35.215989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:12.788 [2024-11-06 08:08:35.216032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:32:12.788 [2024-11-06 08:08:35.216050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.985 ms 00:32:12.788 [2024-11-06 08:08:35.216061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:12.788 [2024-11-06 08:08:35.216678] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:32:12.788 [2024-11-06 08:08:35.216723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:12.788 [2024-11-06 08:08:35.216736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:32:12.788 [2024-11-06 08:08:35.216749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.636 ms 00:32:12.788 [2024-11-06 08:08:35.216761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:12.788 [2024-11-06 08:08:35.216821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:12.788 [2024-11-06 08:08:35.216839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:32:12.788 [2024-11-06 08:08:35.216851] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:12.788 [2024-11-06 08:08:35.216861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:12.788 [2024-11-06 08:08:35.216909] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 538.275 ms, result 0 00:32:12.788 [2024-11-06 08:08:35.216960] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:12.788 [2024-11-06 08:08:35.216978] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:32:12.789 [2024-11-06 08:08:35.216992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:12.789 [2024-11-06 08:08:35.217003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:32:12.789 [2024-11-06 08:08:35.217015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1072.098 ms 00:32:12.789 [2024-11-06 08:08:35.217026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:12.789 [2024-11-06 08:08:35.217067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:12.789 [2024-11-06 08:08:35.217084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:32:12.789 [2024-11-06 08:08:35.217144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:32:12.789 [2024-11-06 08:08:35.217158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:12.789 [2024-11-06 08:08:35.227380] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:32:12.789 [2024-11-06 08:08:35.227538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:12.789 [2024-11-06 08:08:35.227558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:32:12.789 [2024-11-06 08:08:35.227573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.354 ms 00:32:12.789 [2024-11-06 08:08:35.227585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:12.789 [2024-11-06 08:08:35.228209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:12.789 [2024-11-06 08:08:35.228239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:32:12.789 [2024-11-06 08:08:35.228280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.517 ms 00:32:12.789 [2024-11-06 08:08:35.228314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:12.789 [2024-11-06 08:08:35.230229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:12.789 [2024-11-06 08:08:35.230283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:32:12.789 [2024-11-06 08:08:35.230308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.880 ms 00:32:12.789 [2024-11-06 08:08:35.230322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:12.789 [2024-11-06 08:08:35.230391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:12.789 [2024-11-06 08:08:35.230416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:32:12.789 [2024-11-06 08:08:35.230429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:32:12.789 [2024-11-06 08:08:35.230440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:12.789 [2024-11-06 08:08:35.230573] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:12.789 [2024-11-06 08:08:35.230594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:32:12.789 [2024-11-06 08:08:35.230607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:32:12.789 [2024-11-06 08:08:35.230618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:12.789 [2024-11-06 08:08:35.230648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:12.789 [2024-11-06 08:08:35.230663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:32:12.789 [2024-11-06 08:08:35.230676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:32:12.789 [2024-11-06 08:08:35.230687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:12.789 [2024-11-06 08:08:35.230732] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:32:12.789 [2024-11-06 08:08:35.230757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:12.789 [2024-11-06 08:08:35.230768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:32:12.789 [2024-11-06 08:08:35.230780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:32:12.789 [2024-11-06 08:08:35.230792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:12.789 [2024-11-06 08:08:35.230861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:12.789 [2024-11-06 08:08:35.230878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:32:12.789 [2024-11-06 08:08:35.230890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:32:12.789 [2024-11-06 08:08:35.230901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:12.789 [2024-11-06 08:08:35.232379] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1365.371 ms, result 0 00:32:12.789 [2024-11-06 08:08:35.247957] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:12.789 [2024-11-06 08:08:35.263975] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:32:12.789 [2024-11-06 08:08:35.273603] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:12.789 08:08:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:12.789 08:08:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:32:12.789 08:08:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:12.789 08:08:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:32:12.789 08:08:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:32:12.789 08:08:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:32:12.789 08:08:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:32:12.789 08:08:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:12.789 Validate MD5 checksum, iteration 1 00:32:12.789 08:08:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:32:12.789 08:08:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:12.789 08:08:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:12.789 08:08:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:12.789 08:08:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:12.789 08:08:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:12.789 08:08:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:13.048 [2024-11-06 08:08:35.423487] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:32:13.048 [2024-11-06 08:08:35.423964] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82239 ] 00:32:13.048 [2024-11-06 08:08:35.622006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:13.307 [2024-11-06 08:08:35.788353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:15.210  [2024-11-06T08:08:38.407Z] Copying: 531/1024 [MB] (531 MBps) [2024-11-06T08:08:39.783Z] Copying: 1024/1024 [MB] (average 524 MBps) 00:32:17.154 00:32:17.154 08:08:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:32:17.154 08:08:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:19.057 08:08:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:32:19.057 Validate MD5 checksum, iteration 2 00:32:19.057 08:08:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=52203446d90af94bc34f9abe60630959 00:32:19.057 08:08:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 52203446d90af94bc34f9abe60630959 != \5\2\2\0\3\4\4\6\d\9\0\a\f\9\4\b\c\3\4\f\9\a\b\e\6\0\6\3\0\9\5\9 ]] 00:32:19.057 08:08:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:32:19.057 08:08:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:19.057 08:08:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:32:19.057 08:08:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:19.057 08:08:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:19.057 08:08:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:19.057 08:08:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:19.057 08:08:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:19.057 08:08:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:19.057 [2024-11-06 08:08:41.628432] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 00:32:19.057 [2024-11-06 08:08:41.628611] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82303 ] 00:32:19.315 [2024-11-06 08:08:41.818481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:19.574 [2024-11-06 08:08:41.976902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:21.478  [2024-11-06T08:08:44.674Z] Copying: 526/1024 [MB] (526 MBps) [2024-11-06T08:08:47.202Z] Copying: 1024/1024 [MB] (average 531 MBps) 00:32:24.573 00:32:24.573 08:08:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:32:24.573 08:08:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:26.478 08:08:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:32:26.478 08:08:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=24d7e21f142c01579cdd374d0f387b79 00:32:26.478 08:08:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 24d7e21f142c01579cdd374d0f387b79 != \2\4\d\7\e\2\1\f\1\4\2\c\0\1\5\7\9\c\d\d\3\7\4\d\0\f\3\8\7\b\7\9 ]] 00:32:26.478 08:08:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:32:26.478 08:08:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:26.478 08:08:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:32:26.478 08:08:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:32:26.478 08:08:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:32:26.478 08:08:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:26.478 08:08:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:32:26.478 08:08:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:32:26.478 08:08:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:32:26.478 08:08:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:32:26.478 08:08:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 82200 ]] 00:32:26.478 08:08:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 82200 00:32:26.478 08:08:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 82200 ']' 00:32:26.478 08:08:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 82200 00:32:26.478 08:08:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:32:26.478 08:08:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:26.478 08:08:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82200 00:32:26.478 killing process with pid 82200 00:32:26.478 08:08:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:26.478 08:08:49 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:26.478 08:08:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82200' 00:32:26.478 08:08:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 82200 00:32:26.478 08:08:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 82200 00:32:27.416 [2024-11-06 08:08:49.859984] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:32:27.416 [2024-11-06 08:08:49.875769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:27.416 [2024-11-06 08:08:49.875810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:32:27.416 [2024-11-06 08:08:49.875827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:27.416 [2024-11-06 08:08:49.875839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:27.416 [2024-11-06 08:08:49.875866] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:32:27.416 [2024-11-06 08:08:49.879735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:27.416 [2024-11-06 08:08:49.879766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:32:27.416 [2024-11-06 08:08:49.879794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.849 ms 00:32:27.416 [2024-11-06 08:08:49.879809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:27.416 [2024-11-06 08:08:49.880059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:27.416 [2024-11-06 08:08:49.880077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:32:27.416 [2024-11-06 08:08:49.880089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.211 ms 00:32:27.416 [2024-11-06 08:08:49.880099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:27.416 [2024-11-06 08:08:49.881412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:27.416 [2024-11-06 08:08:49.881447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:32:27.416 [2024-11-06 08:08:49.881462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.293 ms 00:32:27.416 [2024-11-06 08:08:49.881473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:27.416 [2024-11-06 08:08:49.882643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:27.416 [2024-11-06 08:08:49.882666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:32:27.416 [2024-11-06 08:08:49.882680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.107 ms 00:32:27.416 [2024-11-06 08:08:49.882689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:27.416 [2024-11-06 08:08:49.894803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:27.416 [2024-11-06 08:08:49.894842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:32:27.416 [2024-11-06 08:08:49.894857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.058 ms 00:32:27.416 [2024-11-06 08:08:49.894867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:27.416 [2024-11-06 08:08:49.901397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:27.416 [2024-11-06 08:08:49.901735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl] name: Persist valid map metadata 00:32:27.416 [2024-11-06 08:08:49.901763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.484 ms 00:32:27.416 [2024-11-06 08:08:49.901776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:27.416 [2024-11-06 08:08:49.901892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:27.416 [2024-11-06 08:08:49.901911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:32:27.416 [2024-11-06 08:08:49.901924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:32:27.416 [2024-11-06 08:08:49.901935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:27.416 [2024-11-06 08:08:49.912282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:27.416 [2024-11-06 08:08:49.912317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:32:27.416 [2024-11-06 08:08:49.912331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.319 ms 00:32:27.416 [2024-11-06 08:08:49.912340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:27.416 [2024-11-06 08:08:49.922868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:27.416 [2024-11-06 08:08:49.922905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:32:27.416 [2024-11-06 08:08:49.922918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.493 ms 00:32:27.416 [2024-11-06 08:08:49.922927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:27.416 [2024-11-06 08:08:49.932905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:27.416 [2024-11-06 08:08:49.932944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:32:27.416 [2024-11-06 08:08:49.932958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.944 ms 00:32:27.416 [2024-11-06 08:08:49.932967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:27.416 [2024-11-06 08:08:49.943095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:27.416 [2024-11-06 08:08:49.943291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:32:27.416 [2024-11-06 08:08:49.943416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.069 ms 00:32:27.416 [2024-11-06 08:08:49.943464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:27.416 [2024-11-06 08:08:49.943601] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:32:27.416 [2024-11-06 08:08:49.943630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:32:27.416 [2024-11-06 08:08:49.943644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:32:27.416 [2024-11-06 08:08:49.943656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:32:27.416 [2024-11-06 08:08:49.943667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:27.416 [2024-11-06 08:08:49.943679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:27.416 [2024-11-06 08:08:49.943690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:27.416 [2024-11-06 08:08:49.943701] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:27.416 [2024-11-06 08:08:49.943712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:27.416 [2024-11-06 08:08:49.943724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:27.416 [2024-11-06 08:08:49.943736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:27.416 [2024-11-06 08:08:49.943747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:27.416 [2024-11-06 08:08:49.943758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:27.416 [2024-11-06 08:08:49.943769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:27.416 [2024-11-06 08:08:49.943780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:27.416 [2024-11-06 08:08:49.943791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:27.416 [2024-11-06 08:08:49.943802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:27.416 [2024-11-06 08:08:49.943813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:27.416 [2024-11-06 08:08:49.943824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:27.416 [2024-11-06 08:08:49.943837] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:32:27.416 [2024-11-06 08:08:49.943862] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 0a382153-ee62-4328-afb1-a35c2c52f33e 00:32:27.416 [2024-11-06 08:08:49.943874] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:32:27.416 [2024-11-06 08:08:49.943884] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:32:27.416 [2024-11-06 08:08:49.943893] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:32:27.416 [2024-11-06 08:08:49.943904] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:32:27.416 [2024-11-06 08:08:49.943914] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:32:27.416 [2024-11-06 08:08:49.943925] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:32:27.416 [2024-11-06 08:08:49.943935] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:32:27.416 [2024-11-06 08:08:49.943944] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:32:27.416 [2024-11-06 08:08:49.943954] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:32:27.416 [2024-11-06 08:08:49.943963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:27.416 [2024-11-06 08:08:49.943974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:32:27.416 [2024-11-06 08:08:49.943992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.365 ms 00:32:27.416 [2024-11-06 08:08:49.944002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:27.416 [2024-11-06 08:08:49.958107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:27.416 [2024-11-06 08:08:49.958146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: 
Deinitialize L2P 00:32:27.416 [2024-11-06 08:08:49.958161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.081 ms 00:32:27.416 [2024-11-06 08:08:49.958171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:27.416 [2024-11-06 08:08:49.958582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:27.416 [2024-11-06 08:08:49.958599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:32:27.416 [2024-11-06 08:08:49.958611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.388 ms 00:32:27.416 [2024-11-06 08:08:49.958636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:27.416 [2024-11-06 08:08:50.004945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:27.416 [2024-11-06 08:08:50.004986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:27.416 [2024-11-06 08:08:50.005001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:27.416 [2024-11-06 08:08:50.005012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:27.416 [2024-11-06 08:08:50.005053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:27.416 [2024-11-06 08:08:50.005066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:27.416 [2024-11-06 08:08:50.005077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:27.416 [2024-11-06 08:08:50.005086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:27.416 [2024-11-06 08:08:50.005176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:27.416 [2024-11-06 08:08:50.005194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:27.417 [2024-11-06 08:08:50.005205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:27.417 [2024-11-06 08:08:50.005216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:27.417 [2024-11-06 08:08:50.005237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:27.417 [2024-11-06 08:08:50.005280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:27.417 [2024-11-06 08:08:50.005293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:27.417 [2024-11-06 08:08:50.005303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:27.676 [2024-11-06 08:08:50.091333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:27.676 [2024-11-06 08:08:50.091558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:27.676 [2024-11-06 08:08:50.091587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:27.676 [2024-11-06 08:08:50.091600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:27.676 [2024-11-06 08:08:50.160076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:27.676 [2024-11-06 08:08:50.160134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:27.676 [2024-11-06 08:08:50.160151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:27.676 [2024-11-06 08:08:50.160162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:27.676 [2024-11-06 08:08:50.160297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:27.676 [2024-11-06 08:08:50.160324] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:27.676 [2024-11-06 08:08:50.160336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:27.676 [2024-11-06 08:08:50.160347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:27.676 [2024-11-06 08:08:50.160401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:27.676 [2024-11-06 08:08:50.160416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:27.676 [2024-11-06 08:08:50.160427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:27.676 [2024-11-06 08:08:50.160453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:27.676 [2024-11-06 08:08:50.160621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:27.676 [2024-11-06 08:08:50.160645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:27.676 [2024-11-06 08:08:50.160658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:27.676 [2024-11-06 08:08:50.160669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:27.676 [2024-11-06 08:08:50.160723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:27.676 [2024-11-06 08:08:50.160741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:32:27.676 [2024-11-06 08:08:50.160753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:27.676 [2024-11-06 08:08:50.160770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:27.676 [2024-11-06 08:08:50.160847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:27.676 [2024-11-06 08:08:50.160862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:27.676 [2024-11-06 08:08:50.160875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:27.676 [2024-11-06 08:08:50.160886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:27.676 [2024-11-06 08:08:50.160937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:27.676 [2024-11-06 08:08:50.160954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:27.676 [2024-11-06 08:08:50.160966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:27.676 [2024-11-06 08:08:50.160982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:27.676 [2024-11-06 08:08:50.161166] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 285.316 ms, result 0 00:32:28.655 08:08:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:32:28.655 08:08:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:28.655 08:08:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:32:28.655 08:08:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:32:28.655 08:08:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:32:28.655 08:08:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:28.655 08:08:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:32:28.655 08:08:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:28.655 Remove 
shared memory files 00:32:28.655 08:08:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:32:28.655 08:08:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:32:28.655 08:08:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid81996 00:32:28.655 08:08:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:28.655 08:08:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:32:28.655 ************************************ 00:32:28.656 END TEST ftl_upgrade_shutdown 00:32:28.656 ************************************ 00:32:28.656 00:32:28.656 real 1m26.011s 00:32:28.656 user 1m57.102s 00:32:28.656 sys 0m26.664s 00:32:28.656 08:08:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:28.656 08:08:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:28.656 Process with pid 74330 is not found 00:32:28.656 08:08:51 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:32:28.656 08:08:51 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:32:28.656 08:08:51 ftl -- ftl/ftl.sh@14 -- # killprocess 74330 00:32:28.656 08:08:51 ftl -- common/autotest_common.sh@950 -- # '[' -z 74330 ']' 00:32:28.656 08:08:51 ftl -- common/autotest_common.sh@954 -- # kill -0 74330 00:32:28.656 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (74330) - No such process 00:32:28.656 08:08:51 ftl -- common/autotest_common.sh@977 -- # echo 'Process with pid 74330 is not found' 00:32:28.656 08:08:51 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:32:28.656 08:08:51 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:28.656 08:08:51 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=82438 00:32:28.656 08:08:51 ftl -- ftl/ftl.sh@20 -- # waitforlisten 82438 00:32:28.656 08:08:51 ftl -- common/autotest_common.sh@831 -- # '[' -z 82438 ']' 00:32:28.656 08:08:51 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:28.656 08:08:51 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:28.656 08:08:51 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:28.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:28.656 08:08:51 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:28.656 08:08:51 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:28.915 [2024-11-06 08:08:51.311363] Starting SPDK v25.01-pre git sha1 ca5713c38 / DPDK 24.03.0 initialization... 
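The killprocess traces above (pid 82200 in the test teardown, pid 74330 in the at_ftl_exit path) expand the same helper each time: probe the pid with kill -0, read the process name with ps, then kill and wait. Below is a minimal bash sketch of that flow, reconstructed from the xtrace alone rather than copied from common/autotest_common.sh; the sudo branch of the real helper is only hinted at here.

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        # kill -0 delivers no signal; it only tests whether the pid still
        # exists, which is why the pid-74330 case above exits via 'not found'
        if ! kill -0 "$pid" 2>/dev/null; then
            echo "Process with pid $pid is not found"
            return 0
        fi
        local process_name=
        if [ "$(uname)" = Linux ]; then
            # an SPDK app's comm shows up as reactor_0, as in the trace above
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [ "$process_name" = sudo ]; then
            return 1   # assumption: the real helper escalates here instead
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"    # reap the child so the caller sees its exit status
    }

The kill -0 probe is what keeps the cleanup path idempotent: a process that already exited (as 74330 had) is reported and skipped rather than treated as a failure.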
00:32:28.915 [2024-11-06 08:08:51.311650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82438 ] 00:32:28.915 [2024-11-06 08:08:51.476581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:29.173 [2024-11-06 08:08:51.583088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:29.741 08:08:52 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:29.741 08:08:52 ftl -- common/autotest_common.sh@864 -- # return 0 00:32:29.741 08:08:52 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:32:30.001 nvme0n1 00:32:30.001 08:08:52 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:32:30.001 08:08:52 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:30.001 08:08:52 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:32:30.574 08:08:52 ftl -- ftl/common.sh@28 -- # stores=5c22f57b-f155-4f94-8c9a-55efcc8343ed 00:32:30.574 08:08:52 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:32:30.574 08:08:52 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5c22f57b-f155-4f94-8c9a-55efcc8343ed 00:32:30.839 08:08:53 ftl -- ftl/ftl.sh@23 -- # killprocess 82438 00:32:30.839 08:08:53 ftl -- common/autotest_common.sh@950 -- # '[' -z 82438 ']' 00:32:30.839 08:08:53 ftl -- common/autotest_common.sh@954 -- # kill -0 82438 00:32:30.839 08:08:53 ftl -- common/autotest_common.sh@955 -- # uname 00:32:30.839 08:08:53 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:30.839 08:08:53 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82438 00:32:30.839 killing process with pid 82438 00:32:30.839 08:08:53 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:30.839 08:08:53 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:30.839 08:08:53 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82438' 00:32:30.839 08:08:53 ftl -- common/autotest_common.sh@969 -- # kill 82438 00:32:30.839 08:08:53 ftl -- common/autotest_common.sh@974 -- # wait 82438 00:32:32.744 08:08:55 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:32:32.744 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:32.744 Waiting for block devices as requested 00:32:33.002 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:32:33.002 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:32:33.002 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:32:33.002 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:32:38.275 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:32:38.275 Remove shared memory files 00:32:38.275 08:09:00 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:32:38.275 08:09:00 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:38.275 08:09:00 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:32:38.275 08:09:00 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:32:38.275 08:09:00 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:32:38.275 08:09:00 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:38.275 08:09:00 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:32:38.275 
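Just before the closing banner, the clear_lvols trace above shows how at_ftl_exit returns the NVMe namespace to a clean state: list every lvstore over the target's RPC socket, extract the UUIDs with jq, then delete each store. A short sketch of that loop, with the rpc path and function shape taken from the ftl/common.sh xtrace rather than its source (details in the real helper may differ):

    clear_lvols() {
        local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
        local stores
        # one UUID per line; this run found a single store, 5c22f57b-...
        stores=$($rpc bdev_lvol_get_lvstores | jq -r '.[] | .uuid')
        for lvs in $stores; do
            $rpc bdev_lvol_delete_lvstore -u "$lvs"
        done
    }

Deleting at the lvstore level is the shortcut here: bdev_lvol_delete_lvstore tears down the store and everything it contains in one RPC, instead of enumerating individual lvols.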
************************************ 00:32:38.275 END TEST ftl 00:32:38.275 ************************************ 00:32:38.275 00:32:38.275 real 12m8.457s 00:32:38.275 user 15m10.392s 00:32:38.275 sys 1m38.372s 00:32:38.275 08:09:00 ftl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:38.275 08:09:00 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:38.275 08:09:00 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:32:38.275 08:09:00 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:32:38.275 08:09:00 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:32:38.275 08:09:00 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:32:38.275 08:09:00 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:32:38.275 08:09:00 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:32:38.275 08:09:00 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:32:38.275 08:09:00 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:32:38.275 08:09:00 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:32:38.275 08:09:00 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:32:38.275 08:09:00 -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:38.275 08:09:00 -- common/autotest_common.sh@10 -- # set +x 00:32:38.275 08:09:00 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:32:38.275 08:09:00 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:32:38.275 08:09:00 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:32:38.275 08:09:00 -- common/autotest_common.sh@10 -- # set +x 00:32:40.178 INFO: APP EXITING 00:32:40.178 INFO: killing all VMs 00:32:40.178 INFO: killing vhost app 00:32:40.178 INFO: EXIT DONE 00:32:40.436 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:41.003 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:32:41.003 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:32:41.003 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:32:41.003 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:32:41.262 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:41.829 Cleaning 00:32:41.829 Removing: /var/run/dpdk/spdk0/config 00:32:41.829 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:41.829 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:41.829 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:41.829 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:41.829 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:41.829 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:41.829 Removing: /var/run/dpdk/spdk0 00:32:41.829 Removing: /var/run/dpdk/spdk_pid57805 00:32:41.829 Removing: /var/run/dpdk/spdk_pid58040 00:32:41.829 Removing: /var/run/dpdk/spdk_pid58269 00:32:41.829 Removing: /var/run/dpdk/spdk_pid58373 00:32:41.829 Removing: /var/run/dpdk/spdk_pid58429 00:32:41.829 Removing: /var/run/dpdk/spdk_pid58563 00:32:41.829 Removing: /var/run/dpdk/spdk_pid58586 00:32:41.829 Removing: /var/run/dpdk/spdk_pid58796 00:32:41.829 Removing: /var/run/dpdk/spdk_pid58908 00:32:41.829 Removing: /var/run/dpdk/spdk_pid59020 00:32:41.829 Removing: /var/run/dpdk/spdk_pid59142 00:32:41.829 Removing: /var/run/dpdk/spdk_pid59250 00:32:41.829 Removing: /var/run/dpdk/spdk_pid59295 00:32:41.829 Removing: /var/run/dpdk/spdk_pid59332 00:32:41.829 Removing: /var/run/dpdk/spdk_pid59408 00:32:41.830 Removing: /var/run/dpdk/spdk_pid59519 00:32:41.830 Removing: /var/run/dpdk/spdk_pid60007 00:32:41.830 Removing: /var/run/dpdk/spdk_pid60084 
00:32:41.830 Removing: /var/run/dpdk/spdk_pid60158 00:32:41.830 Removing: /var/run/dpdk/spdk_pid60175 00:32:41.830 Removing: /var/run/dpdk/spdk_pid60329 00:32:41.830 Removing: /var/run/dpdk/spdk_pid60350 00:32:41.830 Removing: /var/run/dpdk/spdk_pid60504 00:32:41.830 Removing: /var/run/dpdk/spdk_pid60520 00:32:41.830 Removing: /var/run/dpdk/spdk_pid60595 00:32:41.830 Removing: /var/run/dpdk/spdk_pid60613 00:32:41.830 Removing: /var/run/dpdk/spdk_pid60683 00:32:41.830 Removing: /var/run/dpdk/spdk_pid60706 00:32:41.830 Removing: /var/run/dpdk/spdk_pid60907 00:32:41.830 Removing: /var/run/dpdk/spdk_pid60951 00:32:41.830 Removing: /var/run/dpdk/spdk_pid61034 00:32:41.830 Removing: /var/run/dpdk/spdk_pid61223 00:32:41.830 Removing: /var/run/dpdk/spdk_pid61318 00:32:41.830 Removing: /var/run/dpdk/spdk_pid61368 00:32:41.830 Removing: /var/run/dpdk/spdk_pid61844 00:32:41.830 Removing: /var/run/dpdk/spdk_pid61952 00:32:41.830 Removing: /var/run/dpdk/spdk_pid62062 00:32:41.830 Removing: /var/run/dpdk/spdk_pid62121 00:32:41.830 Removing: /var/run/dpdk/spdk_pid62152 00:32:41.830 Removing: /var/run/dpdk/spdk_pid62236 00:32:41.830 Removing: /var/run/dpdk/spdk_pid62869 00:32:41.830 Removing: /var/run/dpdk/spdk_pid62915 00:32:41.830 Removing: /var/run/dpdk/spdk_pid63429 00:32:41.830 Removing: /var/run/dpdk/spdk_pid63531 00:32:41.830 Removing: /var/run/dpdk/spdk_pid63646 00:32:41.830 Removing: /var/run/dpdk/spdk_pid63706 00:32:41.830 Removing: /var/run/dpdk/spdk_pid63726 00:32:41.830 Removing: /var/run/dpdk/spdk_pid63757 00:32:41.830 Removing: /var/run/dpdk/spdk_pid65645 00:32:41.830 Removing: /var/run/dpdk/spdk_pid65793 00:32:41.830 Removing: /var/run/dpdk/spdk_pid65799 00:32:41.830 Removing: /var/run/dpdk/spdk_pid65817 00:32:41.830 Removing: /var/run/dpdk/spdk_pid65862 00:32:41.830 Removing: /var/run/dpdk/spdk_pid65866 00:32:41.830 Removing: /var/run/dpdk/spdk_pid65878 00:32:41.830 Removing: /var/run/dpdk/spdk_pid65923 00:32:41.830 Removing: /var/run/dpdk/spdk_pid65927 00:32:41.830 Removing: /var/run/dpdk/spdk_pid65939 00:32:41.830 Removing: /var/run/dpdk/spdk_pid65989 00:32:41.830 Removing: /var/run/dpdk/spdk_pid65993 00:32:41.830 Removing: /var/run/dpdk/spdk_pid66005 00:32:41.830 Removing: /var/run/dpdk/spdk_pid67386 00:32:41.830 Removing: /var/run/dpdk/spdk_pid67504 00:32:41.830 Removing: /var/run/dpdk/spdk_pid68933 00:32:41.830 Removing: /var/run/dpdk/spdk_pid70299 00:32:41.830 Removing: /var/run/dpdk/spdk_pid70442 00:32:41.830 Removing: /var/run/dpdk/spdk_pid70580 00:32:41.830 Removing: /var/run/dpdk/spdk_pid70708 00:32:41.830 Removing: /var/run/dpdk/spdk_pid70862 00:32:41.830 Removing: /var/run/dpdk/spdk_pid70936 00:32:41.830 Removing: /var/run/dpdk/spdk_pid71084 00:32:41.830 Removing: /var/run/dpdk/spdk_pid71460 00:32:41.830 Removing: /var/run/dpdk/spdk_pid71508 00:32:41.830 Removing: /var/run/dpdk/spdk_pid71993 00:32:41.830 Removing: /var/run/dpdk/spdk_pid72186 00:32:41.830 Removing: /var/run/dpdk/spdk_pid72288 00:32:41.830 Removing: /var/run/dpdk/spdk_pid72405 00:32:41.830 Removing: /var/run/dpdk/spdk_pid72465 00:32:42.089 Removing: /var/run/dpdk/spdk_pid72490 00:32:42.089 Removing: /var/run/dpdk/spdk_pid72781 00:32:42.089 Removing: /var/run/dpdk/spdk_pid72847 00:32:42.089 Removing: /var/run/dpdk/spdk_pid72943 00:32:42.089 Removing: /var/run/dpdk/spdk_pid73381 00:32:42.089 Removing: /var/run/dpdk/spdk_pid73523 00:32:42.089 Removing: /var/run/dpdk/spdk_pid74330 00:32:42.089 Removing: /var/run/dpdk/spdk_pid74473 00:32:42.089 Removing: /var/run/dpdk/spdk_pid74682 00:32:42.089 Removing: 
/var/run/dpdk/spdk_pid74804 00:32:42.089 Removing: /var/run/dpdk/spdk_pid75207 00:32:42.089 Removing: /var/run/dpdk/spdk_pid75493 00:32:42.089 Removing: /var/run/dpdk/spdk_pid75856 00:32:42.089 Removing: /var/run/dpdk/spdk_pid76078 00:32:42.089 Removing: /var/run/dpdk/spdk_pid76222 00:32:42.089 Removing: /var/run/dpdk/spdk_pid76290 00:32:42.089 Removing: /var/run/dpdk/spdk_pid76436 00:32:42.089 Removing: /var/run/dpdk/spdk_pid76478 00:32:42.089 Removing: /var/run/dpdk/spdk_pid76547 00:32:42.089 Removing: /var/run/dpdk/spdk_pid76772 00:32:42.089 Removing: /var/run/dpdk/spdk_pid77036 00:32:42.089 Removing: /var/run/dpdk/spdk_pid77502 00:32:42.089 Removing: /var/run/dpdk/spdk_pid77976 00:32:42.089 Removing: /var/run/dpdk/spdk_pid78484 00:32:42.089 Removing: /var/run/dpdk/spdk_pid79006 00:32:42.089 Removing: /var/run/dpdk/spdk_pid79154 00:32:42.089 Removing: /var/run/dpdk/spdk_pid79247 00:32:42.089 Removing: /var/run/dpdk/spdk_pid79928 00:32:42.089 Removing: /var/run/dpdk/spdk_pid79993 00:32:42.089 Removing: /var/run/dpdk/spdk_pid80471 00:32:42.089 Removing: /var/run/dpdk/spdk_pid80874 00:32:42.089 Removing: /var/run/dpdk/spdk_pid81446 00:32:42.089 Removing: /var/run/dpdk/spdk_pid81574 00:32:42.089 Removing: /var/run/dpdk/spdk_pid81620 00:32:42.089 Removing: /var/run/dpdk/spdk_pid81680 00:32:42.089 Removing: /var/run/dpdk/spdk_pid81737 00:32:42.089 Removing: /var/run/dpdk/spdk_pid81801 00:32:42.089 Removing: /var/run/dpdk/spdk_pid81996 00:32:42.089 Removing: /var/run/dpdk/spdk_pid82070 00:32:42.089 Removing: /var/run/dpdk/spdk_pid82137 00:32:42.089 Removing: /var/run/dpdk/spdk_pid82200 00:32:42.089 Removing: /var/run/dpdk/spdk_pid82239 00:32:42.089 Removing: /var/run/dpdk/spdk_pid82303 00:32:42.089 Removing: /var/run/dpdk/spdk_pid82438 00:32:42.089 Clean 00:32:42.089 08:09:04 -- common/autotest_common.sh@1449 -- # return 0 00:32:42.089 08:09:04 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:32:42.089 08:09:04 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:42.089 08:09:04 -- common/autotest_common.sh@10 -- # set +x 00:32:42.089 08:09:04 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:32:42.089 08:09:04 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:42.089 08:09:04 -- common/autotest_common.sh@10 -- # set +x 00:32:42.348 08:09:04 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:32:42.348 08:09:04 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:32:42.348 08:09:04 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:32:42.348 08:09:04 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:32:42.348 08:09:04 -- spdk/autotest.sh@394 -- # hostname 00:32:42.348 08:09:04 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:32:42.607 geninfo: WARNING: invalid characters removed from testname! 
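[Annotation — not part of the console output.] The lcov invocations traced below merge the pre-test baseline with the post-test capture and then strip coverage records for code outside the tree under test. Condensed into a standalone script, the same sequence is roughly as follows; the paths are copied from this run, the OUT shorthand is assumed, and the genhtml/geninfo --rc flags visible in the log are omitted for brevity:

  #!/usr/bin/env bash
  # Sketch of the coverage post-processing steps traced in this log.
  # OUT stands in for /home/vagrant/spdk_repo/spdk/../output.
  OUT=/home/vagrant/spdk_repo/output
  LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q"

  # Merge the pre-test baseline with the post-test capture.
  $LCOV -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

  # Prune records for code that is not part of the SPDK tree under test.
  $LCOV -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"
  $LCOV -r "$OUT/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$OUT/cov_total.info"
  $LCOV -r "$OUT/cov_total.info" '*/examples/vmd/*' -o "$OUT/cov_total.info"
  $LCOV -r "$OUT/cov_total.info" '*/app/spdk_lspci/*' -o "$OUT/cov_total.info"
  $LCOV -r "$OUT/cov_total.info" '*/app/spdk_top/*' -o "$OUT/cov_total.info"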
00:33:04.536 08:09:26 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:07.823 08:09:29 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:09.767 08:09:32 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:12.299 08:09:34 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:14.832 08:09:36 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:16.736 08:09:39 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:19.268 08:09:41 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:33:19.268 08:09:41 -- common/autotest_common.sh@1688 -- $ [[ y == y ]] 00:33:19.268 08:09:41 -- common/autotest_common.sh@1689 -- $ lcov --version 00:33:19.268 08:09:41 -- common/autotest_common.sh@1689 -- $ awk '{print $NF}' 00:33:19.268 08:09:41 -- common/autotest_common.sh@1689 -- $ lt 1.15 2 00:33:19.268 08:09:41 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:33:19.268 08:09:41 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:33:19.268 08:09:41 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:33:19.268 08:09:41 -- scripts/common.sh@336 -- $ IFS=.-: 00:33:19.268 08:09:41 -- scripts/common.sh@336 -- $ read -ra ver1 00:33:19.268 08:09:41 -- scripts/common.sh@337 -- $ IFS=.-: 00:33:19.268 08:09:41 -- scripts/common.sh@337 -- $ read -ra ver2 00:33:19.268 08:09:41 -- scripts/common.sh@338 -- $ local 'op=<' 00:33:19.268 08:09:41 -- scripts/common.sh@340 -- $ ver1_l=2 00:33:19.268 08:09:41 -- scripts/common.sh@341 -- $ ver2_l=1 00:33:19.268 08:09:41 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 
v 00:33:19.268 08:09:41 -- scripts/common.sh@344 -- $ case "$op" in 00:33:19.268 08:09:41 -- scripts/common.sh@345 -- $ : 1 00:33:19.268 08:09:41 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:33:19.268 08:09:41 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:19.268 08:09:41 -- scripts/common.sh@365 -- $ decimal 1 00:33:19.268 08:09:41 -- scripts/common.sh@353 -- $ local d=1 00:33:19.268 08:09:41 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:33:19.268 08:09:41 -- scripts/common.sh@355 -- $ echo 1 00:33:19.268 08:09:41 -- scripts/common.sh@365 -- $ ver1[v]=1 00:33:19.268 08:09:41 -- scripts/common.sh@366 -- $ decimal 2 00:33:19.268 08:09:41 -- scripts/common.sh@353 -- $ local d=2 00:33:19.268 08:09:41 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:33:19.268 08:09:41 -- scripts/common.sh@355 -- $ echo 2 00:33:19.268 08:09:41 -- scripts/common.sh@366 -- $ ver2[v]=2 00:33:19.268 08:09:41 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:33:19.268 08:09:41 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:33:19.268 08:09:41 -- scripts/common.sh@368 -- $ return 0 00:33:19.268 08:09:41 -- common/autotest_common.sh@1690 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:19.268 08:09:41 -- common/autotest_common.sh@1702 -- $ export 'LCOV_OPTS= 00:33:19.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:19.268 --rc genhtml_branch_coverage=1 00:33:19.268 --rc genhtml_function_coverage=1 00:33:19.268 --rc genhtml_legend=1 00:33:19.268 --rc geninfo_all_blocks=1 00:33:19.268 --rc geninfo_unexecuted_blocks=1 00:33:19.268 00:33:19.268 ' 00:33:19.268 08:09:41 -- common/autotest_common.sh@1702 -- $ LCOV_OPTS=' 00:33:19.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:19.268 --rc genhtml_branch_coverage=1 00:33:19.268 --rc genhtml_function_coverage=1 00:33:19.268 --rc genhtml_legend=1 00:33:19.268 --rc geninfo_all_blocks=1 00:33:19.268 --rc geninfo_unexecuted_blocks=1 00:33:19.268 00:33:19.268 ' 00:33:19.268 08:09:41 -- common/autotest_common.sh@1703 -- $ export 'LCOV=lcov 00:33:19.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:19.268 --rc genhtml_branch_coverage=1 00:33:19.268 --rc genhtml_function_coverage=1 00:33:19.268 --rc genhtml_legend=1 00:33:19.268 --rc geninfo_all_blocks=1 00:33:19.268 --rc geninfo_unexecuted_blocks=1 00:33:19.268 00:33:19.268 ' 00:33:19.268 08:09:41 -- common/autotest_common.sh@1703 -- $ LCOV='lcov 00:33:19.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:19.268 --rc genhtml_branch_coverage=1 00:33:19.268 --rc genhtml_function_coverage=1 00:33:19.268 --rc genhtml_legend=1 00:33:19.268 --rc geninfo_all_blocks=1 00:33:19.268 --rc geninfo_unexecuted_blocks=1 00:33:19.268 00:33:19.268 ' 00:33:19.268 08:09:41 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:19.268 08:09:41 -- scripts/common.sh@15 -- $ shopt -s extglob 00:33:19.268 08:09:41 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:33:19.268 08:09:41 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:19.268 08:09:41 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:19.268 08:09:41 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.268 08:09:41 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.268 08:09:41 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.268 08:09:41 -- paths/export.sh@5 -- $ export PATH 00:33:19.268 08:09:41 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.268 08:09:41 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:33:19.268 08:09:41 -- common/autobuild_common.sh@486 -- $ date +%s 00:33:19.268 08:09:41 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730880581.XXXXXX 00:33:19.268 08:09:41 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730880581.xRkdNf 00:33:19.268 08:09:41 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:33:19.268 08:09:41 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:33:19.268 08:09:41 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:33:19.268 08:09:41 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:33:19.268 08:09:41 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:33:19.268 08:09:41 -- common/autobuild_common.sh@502 -- $ get_config_params 00:33:19.268 08:09:41 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:33:19.268 08:09:41 -- common/autotest_common.sh@10 -- $ set +x 00:33:19.268 08:09:41 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:33:19.268 08:09:41 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:33:19.268 08:09:41 -- pm/common@17 -- $ local monitor 00:33:19.268 08:09:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:19.268 08:09:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 
00:33:19.268 08:09:41 -- pm/common@25 -- $ sleep 1 00:33:19.268 08:09:41 -- pm/common@21 -- $ date +%s 00:33:19.268 08:09:41 -- pm/common@21 -- $ date +%s 00:33:19.268 08:09:41 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1730880581 00:33:19.268 08:09:41 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1730880581 00:33:19.527 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1730880581_collect-cpu-load.pm.log 00:33:19.527 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1730880581_collect-vmstat.pm.log 00:33:20.463 08:09:42 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:33:20.463 08:09:42 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:33:20.463 08:09:42 -- spdk/autopackage.sh@14 -- $ timing_finish 00:33:20.463 08:09:42 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:33:20.463 08:09:42 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:33:20.463 08:09:42 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:33:20.463 08:09:42 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:33:20.463 08:09:42 -- pm/common@29 -- $ signal_monitor_resources TERM 00:33:20.463 08:09:42 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:33:20.463 08:09:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:20.463 08:09:42 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:33:20.463 08:09:42 -- pm/common@44 -- $ pid=84154 00:33:20.463 08:09:42 -- pm/common@50 -- $ kill -TERM 84154 00:33:20.463 08:09:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:20.463 08:09:42 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:33:20.463 08:09:42 -- pm/common@44 -- $ pid=84156 00:33:20.463 08:09:42 -- pm/common@50 -- $ kill -TERM 84156 00:33:20.463 + [[ -n 5404 ]] 00:33:20.463 + sudo kill 5404 00:33:20.472 [Pipeline] } 00:33:20.486 [Pipeline] // timeout 00:33:20.491 [Pipeline] } 00:33:20.505 [Pipeline] // stage 00:33:20.511 [Pipeline] } 00:33:20.524 [Pipeline] // catchError 00:33:20.534 [Pipeline] stage 00:33:20.536 [Pipeline] { (Stop VM) 00:33:20.547 [Pipeline] sh 00:33:20.827 + vagrant halt 00:33:23.360 ==> default: Halting domain... 00:33:29.938 [Pipeline] sh 00:33:30.218 + vagrant destroy -f 00:33:33.518 ==> default: Removing domain... 
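[Annotation — not part of the console output.] The two pipeline stages above shut down and delete the worker VM. Reproduced by hand from the job workspace, the teardown is roughly the following; the workspace path is taken from this run:

  cd /var/jenkins/workspace/nvme-vg-autotest
  vagrant halt          # graceful shutdown ("Halting domain..." above)
  vagrant destroy -f    # force-delete the libvirt domain, no confirmation prompt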
00:33:33.543 [Pipeline] sh 00:33:33.825 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:33:33.834 [Pipeline] } 00:33:33.848 [Pipeline] // stage 00:33:33.853 [Pipeline] } 00:33:33.867 [Pipeline] // dir 00:33:33.873 [Pipeline] } 00:33:33.889 [Pipeline] // wrap 00:33:33.895 [Pipeline] } 00:33:33.908 [Pipeline] // catchError 00:33:33.918 [Pipeline] stage 00:33:33.920 [Pipeline] { (Epilogue) 00:33:33.934 [Pipeline] sh 00:33:34.216 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:33:39.499 [Pipeline] catchError 00:33:39.501 [Pipeline] { 00:33:39.513 [Pipeline] sh 00:33:39.793 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:33:39.793 Artifacts sizes are good 00:33:39.803 [Pipeline] } 00:33:39.818 [Pipeline] // catchError 00:33:39.829 [Pipeline] archiveArtifacts 00:33:39.836 Archiving artifacts 00:33:39.953 [Pipeline] cleanWs 00:33:39.964 [WS-CLEANUP] Deleting project workspace... 00:33:39.964 [WS-CLEANUP] Deferred wipeout is used... 00:33:39.971 [WS-CLEANUP] done 00:33:39.973 [Pipeline] } 00:33:39.988 [Pipeline] // stage 00:33:39.994 [Pipeline] } 00:33:40.008 [Pipeline] // node 00:33:40.013 [Pipeline] End of Pipeline 00:33:40.060 Finished: SUCCESS
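[Annotation — not part of the console output.] For reference, the stop_monitor_resources teardown traced near the end of this log (pm/common, pids 84154 and 84156) follows a simple pidfile convention. A minimal sketch of the pattern; the directory comes from this run, and the loop assumes each collect-* monitor wrote its own PID file when autopackage started it:

  # Stop the resource monitors started at the beginning of autopackage.
  POWER_DIR=/home/vagrant/spdk_repo/output/power   # log shows spdk/../output/power
  for monitor in collect-cpu-load collect-vmstat; do
      pidfile=$POWER_DIR/$monitor.pid
      # Skip monitors that never started or were already cleaned up.
      [[ -e $pidfile ]] || continue
      kill -TERM "$(< "$pidfile")"
  done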