00:00:00.000 Started by upstream project "autotest-per-patch" build number 130928 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.026 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:13.826 The recommended git tool is: git 00:00:13.827 using credential 00000000-0000-0000-0000-000000000002 00:00:13.829 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:13.846 Fetching changes from the remote Git repository 00:00:13.850 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:13.867 Using shallow fetch with depth 1 00:00:13.867 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:13.867 > git --version # timeout=10 00:00:13.882 > git --version # 'git version 2.39.2' 00:00:13.882 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:13.896 Setting http proxy: proxy-dmz.intel.com:911 00:00:13.896 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:18.414 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:18.426 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:18.439 Checking out Revision bc56972291bf21b4d2a602b495a165146a8d67a1 (FETCH_HEAD) 00:00:18.439 > git config core.sparsecheckout # timeout=10 00:00:18.451 > git read-tree -mu HEAD # timeout=10 00:00:18.470 > git checkout -f bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=5 00:00:18.490 Commit message: "jenkins/jjb-config: Remove extendedChoice from ipxe-test-images" 00:00:18.490 > git rev-list --no-walk bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=10 00:00:18.606 [Pipeline] Start of Pipeline 00:00:18.617 [Pipeline] library 00:00:18.618 Loading library shm_lib@master 00:00:18.619 Library shm_lib@master is cached. Copying from home. 00:00:18.631 [Pipeline] node 00:00:33.632 Still waiting to schedule task 00:00:33.633 Waiting for next available executor on ‘vagrant-vm-host’ 00:12:52.751 Running on VM-host-SM4 in /var/jenkins/workspace/nvme-vg-autotest_2 00:12:52.753 [Pipeline] { 00:12:52.765 [Pipeline] catchError 00:12:52.766 [Pipeline] { 00:12:52.782 [Pipeline] wrap 00:12:52.790 [Pipeline] { 00:12:52.797 [Pipeline] stage 00:12:52.799 [Pipeline] { (Prologue) 00:12:52.823 [Pipeline] echo 00:12:52.826 Node: VM-host-SM4 00:12:52.835 [Pipeline] cleanWs 00:12:52.846 [WS-CLEANUP] Deleting project workspace... 00:12:52.846 [WS-CLEANUP] Deferred wipeout is used... 
00:12:52.850 [WS-CLEANUP] done 00:12:53.030 [Pipeline] setCustomBuildProperty 00:12:53.113 [Pipeline] httpRequest 00:12:53.512 [Pipeline] echo 00:12:53.513 Sorcerer 10.211.164.101 is alive 00:12:53.520 [Pipeline] retry 00:12:53.522 [Pipeline] { 00:12:53.531 [Pipeline] httpRequest 00:12:53.534 HttpMethod: GET 00:12:53.534 URL: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:12:53.534 Sending request to url: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:12:53.536 Response Code: HTTP/1.1 200 OK 00:12:53.536 Success: Status code 200 is in the accepted range: 200,404 00:12:53.537 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:12:53.683 [Pipeline] } 00:12:53.700 [Pipeline] // retry 00:12:53.707 [Pipeline] sh 00:12:53.983 + tar --no-same-owner -xf jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:12:53.998 [Pipeline] httpRequest 00:12:54.394 [Pipeline] echo 00:12:54.396 Sorcerer 10.211.164.101 is alive 00:12:54.405 [Pipeline] retry 00:12:54.407 [Pipeline] { 00:12:54.420 [Pipeline] httpRequest 00:12:54.424 HttpMethod: GET 00:12:54.425 URL: http://10.211.164.101/packages/spdk_716daf68301ef3125e0618c419a5d2b0b1ee270b.tar.gz 00:12:54.426 Sending request to url: http://10.211.164.101/packages/spdk_716daf68301ef3125e0618c419a5d2b0b1ee270b.tar.gz 00:12:54.427 Response Code: HTTP/1.1 200 OK 00:12:54.427 Success: Status code 200 is in the accepted range: 200,404 00:12:54.428 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/spdk_716daf68301ef3125e0618c419a5d2b0b1ee270b.tar.gz 00:12:56.660 [Pipeline] } 00:12:56.677 [Pipeline] // retry 00:12:56.684 [Pipeline] sh 00:12:56.959 + tar --no-same-owner -xf spdk_716daf68301ef3125e0618c419a5d2b0b1ee270b.tar.gz 00:13:00.244 [Pipeline] sh 00:13:00.518 + git -C spdk log --oneline -n5 00:13:00.518 716daf683 bdev/nvme: interrupt mode for PCIe nvme ctrlr 00:13:00.518 33a99df94 nvme: create, manage fd_group for nvme poll group 00:13:00.518 d49b794e4 thread: Extended options for spdk_interrupt_register 00:13:00.518 e2e9091fb util: allow a fd_group to manage all its fds 00:13:00.518 89fbd3ce7 util: fix total fds to wait for 00:13:00.534 [Pipeline] writeFile 00:13:00.548 [Pipeline] sh 00:13:00.826 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:13:00.835 [Pipeline] sh 00:13:01.137 + cat autorun-spdk.conf 00:13:01.137 SPDK_RUN_FUNCTIONAL_TEST=1 00:13:01.137 SPDK_TEST_NVME=1 00:13:01.137 SPDK_TEST_FTL=1 00:13:01.137 SPDK_TEST_ISAL=1 00:13:01.137 SPDK_RUN_ASAN=1 00:13:01.137 SPDK_RUN_UBSAN=1 00:13:01.137 SPDK_TEST_XNVME=1 00:13:01.137 SPDK_TEST_NVME_FDP=1 00:13:01.137 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:13:01.142 RUN_NIGHTLY=0 00:13:01.144 [Pipeline] } 00:13:01.155 [Pipeline] // stage 00:13:01.168 [Pipeline] stage 00:13:01.170 [Pipeline] { (Run VM) 00:13:01.182 [Pipeline] sh 00:13:01.459 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:13:01.459 + echo 'Start stage prepare_nvme.sh' 00:13:01.460 Start stage prepare_nvme.sh 00:13:01.460 + [[ -n 7 ]] 00:13:01.460 + disk_prefix=ex7 00:13:01.460 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_2 ]] 00:13:01.460 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf ]] 00:13:01.460 + source /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf 00:13:01.460 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:13:01.460 ++ SPDK_TEST_NVME=1 00:13:01.460 ++ SPDK_TEST_FTL=1 00:13:01.460 ++ SPDK_TEST_ISAL=1 00:13:01.460 ++ SPDK_RUN_ASAN=1 
00:13:01.460 ++ SPDK_RUN_UBSAN=1
00:13:01.460 ++ SPDK_TEST_XNVME=1
00:13:01.460 ++ SPDK_TEST_NVME_FDP=1
00:13:01.460 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:13:01.460 ++ RUN_NIGHTLY=0
00:13:01.460 + cd /var/jenkins/workspace/nvme-vg-autotest_2
00:13:01.460 + nvme_files=()
00:13:01.460 + declare -A nvme_files
00:13:01.460 + backend_dir=/var/lib/libvirt/images/backends
00:13:01.460 + nvme_files['nvme.img']=5G
00:13:01.460 + nvme_files['nvme-cmb.img']=5G
00:13:01.460 + nvme_files['nvme-multi0.img']=4G
00:13:01.460 + nvme_files['nvme-multi1.img']=4G
00:13:01.460 + nvme_files['nvme-multi2.img']=4G
00:13:01.460 + nvme_files['nvme-openstack.img']=8G
00:13:01.460 + nvme_files['nvme-zns.img']=5G
00:13:01.460 + (( SPDK_TEST_NVME_PMR == 1 ))
00:13:01.460 + (( SPDK_TEST_FTL == 1 ))
00:13:01.460 + nvme_files["nvme-ftl.img"]=6G
00:13:01.460 + (( SPDK_TEST_NVME_FDP == 1 ))
00:13:01.460 + nvme_files["nvme-fdp.img"]=1G
00:13:01.460 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:13:01.460 + for nvme in "${!nvme_files[@]}"
00:13:01.460 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G
00:13:01.460 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:13:01.460 + for nvme in "${!nvme_files[@]}"
00:13:01.460 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-ftl.img -s 6G
00:13:01.460 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:13:01.460 + for nvme in "${!nvme_files[@]}"
00:13:01.460 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G
00:13:01.460 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:13:01.460 + for nvme in "${!nvme_files[@]}"
00:13:01.460 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G
00:13:01.460 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:13:01.460 + for nvme in "${!nvme_files[@]}"
00:13:01.460 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G
00:13:01.460 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:13:01.460 + for nvme in "${!nvme_files[@]}"
00:13:01.460 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G
00:13:01.460 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:13:01.460 + for nvme in "${!nvme_files[@]}"
00:13:01.460 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G
00:13:01.460 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:13:01.460 + for nvme in "${!nvme_files[@]}"
00:13:01.460 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-fdp.img -s 1G
00:13:01.717 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:13:01.717 + for nvme in "${!nvme_files[@]}"
00:13:01.717 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G
00:13:02.647 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:13:02.647 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu
00:13:02.647 + echo 'End stage prepare_nvme.sh'
00:13:02.647 End stage prepare_nvme.sh
00:13:02.657 [Pipeline] sh
00:13:02.933 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:13:02.933 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex7-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:13:02.933
00:13:02.933 DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant
00:13:02.933 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk
00:13:02.933 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_2
00:13:02.933 HELP=0
00:13:02.933 DRY_RUN=0
00:13:02.933 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme-ftl.img,/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,/var/lib/libvirt/images/backends/ex7-nvme-fdp.img,
00:13:02.933 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:13:02.933 NVME_AUTO_CREATE=0
00:13:02.933 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,,
00:13:02.933 NVME_CMB=,,,,
00:13:02.933 NVME_PMR=,,,,
00:13:02.933 NVME_ZNS=,,,,
00:13:02.933 NVME_MS=true,,,,
00:13:02.933 NVME_FDP=,,,on,
00:13:02.933 SPDK_VAGRANT_DISTRO=fedora39
00:13:02.933 SPDK_VAGRANT_VMCPU=10
00:13:02.933 SPDK_VAGRANT_VMRAM=12288
00:13:02.933 SPDK_VAGRANT_PROVIDER=libvirt
00:13:02.933 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:13:02.933 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:13:02.933 SPDK_OPENSTACK_NETWORK=0
00:13:02.933 VAGRANT_PACKAGE_BOX=0
00:13:02.933 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile
00:13:02.933 FORCE_DISTRO=true
00:13:02.933 VAGRANT_BOX_VERSION=
00:13:02.933 EXTRA_VAGRANTFILES=
00:13:02.933 NIC_MODEL=e1000
00:13:02.933
00:13:02.933 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt'
00:13:02.933 /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest_2
00:13:07.108 Bringing machine 'default' up with 'libvirt' provider...
00:13:07.674 ==> default: Creating image (snapshot of base box volume).
00:13:07.932 ==> default: Creating domain with the following settings...
00:13:07.932 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1728412836_576daf91e95c9a9e3465
00:13:07.932 ==> default: -- Domain type: kvm
00:13:07.932 ==> default: -- Cpus: 10
00:13:07.932 ==> default: -- Feature: acpi
00:13:07.932 ==> default: -- Feature: apic
00:13:07.932 ==> default: -- Feature: pae
00:13:07.932 ==> default: -- Memory: 12288M
00:13:07.932 ==> default: -- Memory Backing: hugepages:
00:13:07.932 ==> default: -- Management MAC:
00:13:07.932 ==> default: -- Loader:
00:13:07.932 ==> default: -- Nvram:
00:13:07.932 ==> default: -- Base box: spdk/fedora39
00:13:07.932 ==> default: -- Storage pool: default
00:13:07.932 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1728412836_576daf91e95c9a9e3465.img (20G)
00:13:07.932 ==> default: -- Volume Cache: default
00:13:07.932 ==> default: -- Kernel:
00:13:07.932 ==> default: -- Initrd:
00:13:07.932 ==> default: -- Graphics Type: vnc
00:13:07.932 ==> default: -- Graphics Port: -1
00:13:07.932 ==> default: -- Graphics IP: 127.0.0.1
00:13:07.932 ==> default: -- Graphics Password: Not defined
00:13:07.932 ==> default: -- Video Type: cirrus
00:13:07.932 ==> default: -- Video VRAM: 9216
00:13:07.932 ==> default: -- Sound Type:
00:13:07.932 ==> default: -- Keymap: en-us
00:13:07.932 ==> default: -- TPM Path:
00:13:07.932 ==> default: -- INPUT: type=mouse, bus=ps2
00:13:07.932 ==> default: -- Command line args:
00:13:07.932 ==> default: -> value=-device,
00:13:07.932 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:13:07.932 ==> default: -> value=-drive,
00:13:07.932 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:13:07.932 ==> default: -> value=-device,
00:13:07.932 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:13:07.932 ==> default: -> value=-device,
00:13:07.932 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:13:07.932 ==> default: -> value=-drive,
00:13:07.932 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-1-drive0,
00:13:07.932 ==> default: -> value=-device,
00:13:07.932 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:13:07.932 ==> default: -> value=-device,
00:13:07.932 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:13:07.932 ==> default: -> value=-drive,
00:13:07.932 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:13:07.932 ==> default: -> value=-device,
00:13:07.932 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:13:07.932 ==> default: -> value=-drive,
00:13:07.932 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:13:07.932 ==> default: -> value=-device,
00:13:07.932 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:13:07.932 ==> default: -> value=-drive,
00:13:07.932 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:13:07.932 ==> default: -> value=-device,
00:13:07.932 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:13:07.932 ==> default: -> value=-device,
00:13:07.932 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:13:07.932 ==> default: -> value=-device,
00:13:07.932 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:13:07.932 ==> default: -> value=-drive,
00:13:07.932 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:13:07.932 ==> default: -> value=-device,
00:13:07.932 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:13:08.191 ==> default: Creating shared folders metadata...
00:13:08.191 ==> default: Starting domain.
00:13:10.094 ==> default: Waiting for domain to get an IP address...
00:13:28.165 ==> default: Waiting for SSH to become available...
00:13:28.165 ==> default: Configuring and enabling network interfaces...
00:13:32.366 default: SSH address: 192.168.121.186:22
00:13:32.366 default: SSH username: vagrant
00:13:32.366 default: SSH auth method: private key
00:13:34.896 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk
00:13:43.077 ==> default: Mounting SSHFS shared folder...
00:13:44.451 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:13:44.451 ==> default: Checking Mount..
00:13:45.854 ==> default: Folder Successfully Mounted!
00:13:45.854 ==> default: Running provisioner: file...
00:13:46.789 default: ~/.gitconfig => .gitconfig
00:13:47.048
00:13:47.048 SUCCESS!
00:13:47.048
00:13:47.048 cd to /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use.
00:13:47.048 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:13:47.048 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm.
00:13:47.048
00:13:47.056 [Pipeline] }
00:13:47.071 [Pipeline] // stage
00:13:47.080 [Pipeline] dir
00:13:47.081 Running in /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt
00:13:47.083 [Pipeline] {
00:13:47.095 [Pipeline] catchError
00:13:47.096 [Pipeline] {
00:13:47.108 [Pipeline] sh
00:13:47.386 + vagrant ssh-config --host vagrant
00:13:47.386 + sed -ne /^Host/,$p
00:13:47.386 + tee ssh_conf
00:13:51.651 Host vagrant
00:13:51.651 HostName 192.168.121.186
00:13:51.651 User vagrant
00:13:51.651 Port 22
00:13:51.651 UserKnownHostsFile /dev/null
00:13:51.651 StrictHostKeyChecking no
00:13:51.651 PasswordAuthentication no
00:13:51.651 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:13:51.651 IdentitiesOnly yes
00:13:51.651 LogLevel FATAL
00:13:51.651 ForwardAgent yes
00:13:51.651 ForwardX11 yes
00:13:51.651
00:13:51.665 [Pipeline] withEnv
00:13:51.667 [Pipeline] {
00:13:51.681 [Pipeline] sh
00:13:51.961 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:13:51.961 source /etc/os-release
00:13:51.961 [[ -e /image.version ]] && img=$(< /image.version)
00:13:51.961 # Minimal, systemd-like check.
00:13:51.961 if [[ -e /.dockerenv ]]; then
00:13:51.961 # Clear garbage from the node's name:
00:13:51.961 # agt-er_autotest_547-896 -> autotest_547-896
00:13:51.961 # $HOSTNAME is the actual container id
00:13:51.961 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:13:51.961 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:13:51.961 # We can assume this is a mount from a host where container is running,
00:13:51.961 # so fetch its hostname to easily identify the target swarm worker.
00:13:51.961 container="$(< /etc/hostname) ($agent)"
00:13:51.961 else
00:13:51.961 # Fallback
00:13:51.961 container=$agent
00:13:51.961 fi
00:13:51.961 fi
00:13:51.961 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:13:51.961
00:13:52.231 [Pipeline] }
00:13:52.249 [Pipeline] // withEnv
00:13:52.258 [Pipeline] setCustomBuildProperty
00:13:52.271 [Pipeline] stage
00:13:52.274 [Pipeline] { (Tests)
00:13:52.288 [Pipeline] sh
00:13:52.567 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:13:52.853 [Pipeline] sh
00:13:53.132 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:13:53.407 [Pipeline] timeout
00:13:53.407 Timeout set to expire in 50 min
00:13:53.409 [Pipeline] {
00:13:53.423 [Pipeline] sh
00:13:53.704 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:13:54.271 HEAD is now at 716daf683 bdev/nvme: interrupt mode for PCIe nvme ctrlr
00:13:54.282 [Pipeline] sh
00:13:54.561 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:13:54.830 [Pipeline] sh
00:13:55.107 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:13:55.379 [Pipeline] sh
00:13:55.658 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo
00:13:55.916 ++ readlink -f spdk_repo
00:13:55.916 + DIR_ROOT=/home/vagrant/spdk_repo
00:13:55.916 + [[ -n /home/vagrant/spdk_repo ]]
00:13:55.916 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:13:55.916 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:13:55.916 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:13:55.916 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:13:55.916 + [[ -d /home/vagrant/spdk_repo/output ]]
00:13:55.916 + [[ nvme-vg-autotest == pkgdep-* ]]
00:13:55.916 + cd /home/vagrant/spdk_repo
00:13:55.916 + source /etc/os-release
00:13:55.916 ++ NAME='Fedora Linux'
00:13:55.916 ++ VERSION='39 (Cloud Edition)'
00:13:55.916 ++ ID=fedora
00:13:55.916 ++ VERSION_ID=39
00:13:55.916 ++ VERSION_CODENAME=
00:13:55.916 ++ PLATFORM_ID=platform:f39
00:13:55.916 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:13:55.916 ++ ANSI_COLOR='0;38;2;60;110;180'
00:13:55.916 ++ LOGO=fedora-logo-icon
00:13:55.916 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:13:55.916 ++ HOME_URL=https://fedoraproject.org/
00:13:55.916 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:13:55.916 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:13:55.916 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:13:55.916 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:13:55.916 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:13:55.916 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:13:55.916 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:13:55.916 ++ SUPPORT_END=2024-11-12
00:13:55.916 ++ VARIANT='Cloud Edition'
00:13:55.916 ++ VARIANT_ID=cloud
00:13:55.916 + uname -a
00:13:55.916 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:13:55.916 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:13:56.174 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:13:56.431 Hugepages
00:13:56.431 node hugesize free / total
00:13:56.431 node0 1048576kB 0 / 0
00:13:56.431 node0 2048kB 0 / 0
00:13:56.431
00:13:56.431 Type BDF Vendor Device NUMA Driver Device Block devices
00:13:56.689 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:13:56.689 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:13:56.689 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:13:56.689 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:13:56.689 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:13:56.689 + rm -f /tmp/spdk-ld-path
00:13:56.689 + source autorun-spdk.conf
00:13:56.689 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:13:56.689 ++ SPDK_TEST_NVME=1
00:13:56.689 ++ SPDK_TEST_FTL=1
00:13:56.689 ++ SPDK_TEST_ISAL=1
00:13:56.689 ++ SPDK_RUN_ASAN=1
00:13:56.689 ++ SPDK_RUN_UBSAN=1
00:13:56.689 ++ SPDK_TEST_XNVME=1
00:13:56.689 ++ SPDK_TEST_NVME_FDP=1
00:13:56.689 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:13:56.689 ++ RUN_NIGHTLY=0
00:13:56.689 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:13:56.689 + [[ -n '' ]]
00:13:56.689 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:13:56.689 + for M in /var/spdk/build-*-manifest.txt
00:13:56.689 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:13:56.689 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:13:56.689 + for M in /var/spdk/build-*-manifest.txt
00:13:56.689 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:13:56.689 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:13:56.689 + for M in /var/spdk/build-*-manifest.txt
00:13:56.689 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:13:56.689 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:13:56.689 ++ uname
00:13:56.689 + [[ Linux == \L\i\n\u\x ]]
00:13:56.689 + sudo dmesg -T
00:13:56.689 + sudo dmesg --clear
00:13:56.689 + dmesg_pid=5304
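For context: the autoruner step above copies autorun-spdk.conf into the repo, sources it, and later phases gate on its flags (e.g. the (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) check in the trace). A minimal sketch of that pattern in bash; the file name and SPDK_TEST_NVME_FDP flag are taken from this log, but the wrapper itself is illustrative, not SPDK's actual script (which simply sources the file):

    #!/usr/bin/env bash
    # set -a exports every assignment made while the conf is sourced,
    # so child processes also see the SPDK_* flags.
    set -a
    source ./autorun-spdk.conf
    set +a
    # Unset flags evaluate as 0 inside (( ... )), so checks like the one
    # traced above are safe even for flags the conf file omits.
    if (( SPDK_TEST_NVME_FDP == 1 )); then
        echo "FDP testing enabled"
    fi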
+ [[ Fedora Linux == FreeBSD ]]
00:13:56.689 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:13:56.689 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:13:56.689 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:13:56.689 + [[ -x /usr/src/fio-static/fio ]]
00:13:56.689 + sudo dmesg -Tw
00:13:56.689 + export FIO_BIN=/usr/src/fio-static/fio
00:13:56.689 + FIO_BIN=/usr/src/fio-static/fio
00:13:56.689 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:13:56.689 + [[ ! -v VFIO_QEMU_BIN ]]
00:13:56.689 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:13:56.689 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:13:56.689 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:13:56.689 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:13:56.689 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:13:56.689 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:13:56.689 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:13:56.948 Test configuration:
00:13:56.948 SPDK_RUN_FUNCTIONAL_TEST=1
00:13:56.948 SPDK_TEST_NVME=1
00:13:56.948 SPDK_TEST_FTL=1
00:13:56.948 SPDK_TEST_ISAL=1
00:13:56.948 SPDK_RUN_ASAN=1
00:13:56.948 SPDK_RUN_UBSAN=1
00:13:56.948 SPDK_TEST_XNVME=1
00:13:56.948 SPDK_TEST_NVME_FDP=1
00:13:56.948 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:13:56.948 RUN_NIGHTLY=0
00:13:56.948 18:41:25 -- common/autotest_common.sh@1680 -- $ [[ n == y ]]
00:13:56.948 18:41:25 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:13:56.948 18:41:25 -- scripts/common.sh@15 -- $ shopt -s extglob
00:13:56.948 18:41:25 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:13:56.948 18:41:25 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:13:56.948 18:41:25 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:13:56.948 18:41:25 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:56.948 18:41:25 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:56.948 18:41:25 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:56.948 18:41:25 -- paths/export.sh@5 -- $ export PATH
00:13:56.948 18:41:25 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:56.948 18:41:25 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:13:56.948 18:41:25 -- common/autobuild_common.sh@486 -- $ date +%s
00:13:56.948 18:41:25 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728412885.XXXXXX
00:13:56.948 18:41:25 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728412885.MYLiyA
00:13:56.948 18:41:25 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:13:56.948 18:41:25 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:13:56.948 18:41:25 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:13:56.948 18:41:25 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:13:56.948 18:41:25 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:13:56.948 18:41:25 -- common/autobuild_common.sh@502 -- $ get_config_params
00:13:56.948 18:41:25 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:13:56.948 18:41:25 -- common/autotest_common.sh@10 -- $ set +x
00:13:56.948 18:41:25 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:13:56.948 18:41:25 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:13:56.948 18:41:25 -- pm/common@17 -- $ local monitor
00:13:56.948 18:41:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:13:56.948 18:41:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:13:56.948 18:41:25 -- pm/common@25 -- $ sleep 1
00:13:56.948 18:41:25 -- pm/common@21 -- $ date +%s
00:13:56.948 18:41:25 -- pm/common@21 -- $ date +%s
00:13:56.948 18:41:25 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728412885
00:13:56.948 18:41:25 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728412885
00:13:56.948 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728412885_collect-vmstat.pm.log
00:13:56.948 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728412885_collect-cpu-load.pm.log
00:13:57.884 18:41:26 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:13:57.884 18:41:26 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:13:57.884 18:41:26 -- spdk/autobuild.sh@12 -- $ umask 022
00:13:57.884 18:41:26 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:13:57.884 18:41:26 -- spdk/autobuild.sh@16 -- $ date -u
00:13:57.884 Tue Oct 8 06:41:26 PM UTC 2024
00:13:57.884 18:41:26 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:13:57.884 v25.01-pre-53-g716daf683
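The configure step traced below consumes the config_params string assembled above, with --with-shared appended by autobuild. Run by hand with the same flags (all copied from this log), the invocation would be roughly:

    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-asan --enable-coverage --with-ublk \
        --with-xnvme --with-shared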
00:13:57.884 18:41:26 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:13:57.884 18:41:26 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:13:57.884 18:41:26 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:13:57.884 18:41:26 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:13:57.884 18:41:26 -- common/autotest_common.sh@10 -- $ set +x
00:13:57.884 ************************************
00:13:57.884 START TEST asan
00:13:57.884 ************************************
00:13:57.884 using asan
00:13:57.884 18:41:26 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan'
00:13:57.884
00:13:57.884 real 0m0.000s
00:13:57.884 user 0m0.000s
00:13:57.884 sys 0m0.000s
00:13:57.884 18:41:26 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:13:57.884 18:41:26 asan -- common/autotest_common.sh@10 -- $ set +x
00:13:57.884 ************************************
00:13:57.884 END TEST asan
00:13:57.884 ************************************
00:13:58.144 18:41:26 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:13:58.144 18:41:26 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:13:58.144 18:41:26 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:13:58.144 18:41:26 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:13:58.144 18:41:26 -- common/autotest_common.sh@10 -- $ set +x
00:13:58.144 ************************************
00:13:58.144 START TEST ubsan
00:13:58.144 ************************************
00:13:58.144 using ubsan
00:13:58.144 18:41:26 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:13:58.144
00:13:58.144 real 0m0.000s
00:13:58.144 user 0m0.000s
00:13:58.144 sys 0m0.000s
00:13:58.144 18:41:26 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:13:58.144 18:41:26 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:13:58.144 ************************************
00:13:58.144 END TEST ubsan
00:13:58.144 ************************************
00:13:58.144 18:41:26 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:13:58.144 18:41:26 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:13:58.144 18:41:26 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:13:58.144 18:41:26 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:13:58.144 18:41:26 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:13:58.144 18:41:26 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:13:58.144 18:41:26 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:13:58.144 18:41:26 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:13:58.144 18:41:26 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:13:58.144 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:13:58.144 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:13:58.710 Using 'verbs' RDMA provider
00:14:14.522 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:14:29.443 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:14:29.443 Creating mk/config.mk...done.
00:14:29.443 Creating mk/cc.flags.mk...done.
00:14:29.443 Type 'make' to build.
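The make stage that follows is driven by run_test, the SPDK test-harness helper that prints the START TEST / END TEST banners and the `time` output seen above. A simplified, illustrative re-implementation of what such a wrapper does in bash (not the actual autotest_common.sh code):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"          # the real helper also records per-test timing
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
    # e.g.: run_test make make -j10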
00:14:29.443 18:41:56 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:14:29.443 18:41:56 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:14:29.443 18:41:56 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:14:29.443 18:41:56 -- common/autotest_common.sh@10 -- $ set +x 00:14:29.443 ************************************ 00:14:29.443 START TEST make 00:14:29.443 ************************************ 00:14:29.443 18:41:56 make -- common/autotest_common.sh@1125 -- $ make -j10 00:14:29.443 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:14:29.443 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:14:29.443 meson setup builddir \ 00:14:29.443 -Dwith-libaio=enabled \ 00:14:29.443 -Dwith-liburing=enabled \ 00:14:29.443 -Dwith-libvfn=disabled \ 00:14:29.443 -Dwith-spdk=false && \ 00:14:29.443 meson compile -C builddir && \ 00:14:29.443 cd -) 00:14:29.443 make[1]: Nothing to be done for 'all'. 00:14:32.069 The Meson build system 00:14:32.069 Version: 1.5.0 00:14:32.069 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:14:32.069 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:14:32.069 Build type: native build 00:14:32.069 Project name: xnvme 00:14:32.069 Project version: 0.7.3 00:14:32.069 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:14:32.069 C linker for the host machine: cc ld.bfd 2.40-14 00:14:32.069 Host machine cpu family: x86_64 00:14:32.069 Host machine cpu: x86_64 00:14:32.069 Message: host_machine.system: linux 00:14:32.069 Compiler for C supports arguments -Wno-missing-braces: YES 00:14:32.069 Compiler for C supports arguments -Wno-cast-function-type: YES 00:14:32.069 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:14:32.069 Run-time dependency threads found: YES 00:14:32.069 Has header "setupapi.h" : NO 00:14:32.069 Has header "linux/blkzoned.h" : YES 00:14:32.069 Has header "linux/blkzoned.h" : YES (cached) 00:14:32.069 Has header "libaio.h" : YES 00:14:32.069 Library aio found: YES 00:14:32.069 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:14:32.069 Run-time dependency liburing found: YES 2.2 00:14:32.069 Dependency libvfn skipped: feature with-libvfn disabled 00:14:32.069 Run-time dependency appleframeworks found: NO (tried framework) 00:14:32.069 Run-time dependency appleframeworks found: NO (tried framework) 00:14:32.069 Configuring xnvme_config.h using configuration 00:14:32.069 Configuring xnvme.spec using configuration 00:14:32.069 Run-time dependency bash-completion found: YES 2.11 00:14:32.069 Message: Bash-completions: /usr/share/bash-completion/completions 00:14:32.069 Program cp found: YES (/usr/bin/cp) 00:14:32.069 Has header "winsock2.h" : NO 00:14:32.069 Has header "dbghelp.h" : NO 00:14:32.069 Library rpcrt4 found: NO 00:14:32.069 Library rt found: YES 00:14:32.069 Checking for function "clock_gettime" with dependency -lrt: YES 00:14:32.069 Found CMake: /usr/bin/cmake (3.27.7) 00:14:32.069 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:14:32.069 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:14:32.069 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:14:32.069 Build targets in project: 32 00:14:32.069 00:14:32.069 xnvme 0.7.3 00:14:32.069 00:14:32.069 User defined options 00:14:32.069 with-libaio : enabled 00:14:32.069 with-liburing: enabled 00:14:32.069 with-libvfn : disabled 00:14:32.069 with-spdk : false 00:14:32.069 00:14:32.069 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:14:32.328 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:14:32.328 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:14:32.586 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:14:32.586 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:14:32.586 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:14:32.586 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:14:32.586 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:14:32.586 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:14:32.587 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:14:32.587 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:14:32.587 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:14:32.587 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:14:32.587 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:14:32.587 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:14:32.844 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:14:32.844 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:14:32.844 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:14:32.844 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:14:32.844 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:14:32.844 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:14:32.844 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:14:32.844 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:14:32.844 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:14:32.844 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:14:32.844 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:14:32.844 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:14:32.844 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:14:32.844 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:14:32.844 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:14:32.844 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:14:32.845 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:14:33.103 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:14:33.103 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:14:33.103 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:14:33.103 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:14:33.103 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:14:33.103 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:14:33.103 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:14:33.103 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:14:33.103 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:14:33.103 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:14:33.103 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 
00:14:33.103 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:14:33.103 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:14:33.103 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:14:33.103 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:14:33.103 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:14:33.103 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:14:33.103 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:14:33.103 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:14:33.103 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:14:33.103 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:14:33.103 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:14:33.103 [53/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:14:33.103 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:14:33.361 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:14:33.361 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:14:33.361 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:14:33.361 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:14:33.361 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:14:33.361 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:14:33.361 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:14:33.361 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:14:33.361 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:14:33.361 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:14:33.361 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:14:33.361 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:14:33.361 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:14:33.619 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:14:33.619 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:14:33.619 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:14:33.619 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:14:33.619 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:14:33.619 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:14:33.619 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:14:33.619 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:14:33.619 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:14:33.619 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:14:33.619 [78/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:14:33.619 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:14:33.619 [80/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:14:33.877 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:14:33.877 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:14:33.877 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:14:33.877 [84/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:14:33.877 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:14:33.877 [86/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:14:33.877 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:14:33.877 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:14:33.877 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:14:33.877 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:14:34.136 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:14:34.136 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:14:34.136 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:14:34.136 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:14:34.136 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:14:34.136 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:14:34.136 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:14:34.136 [98/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:14:34.136 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:14:34.136 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:14:34.136 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:14:34.136 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:14:34.136 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:14:34.136 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:14:34.136 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:14:34.136 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:14:34.136 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:14:34.136 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:14:34.136 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:14:34.136 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:14:34.136 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:14:34.136 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:14:34.395 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:14:34.395 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:14:34.395 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:14:34.395 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:14:34.395 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:14:34.395 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:14:34.395 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:14:34.395 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:14:34.395 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:14:34.395 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:14:34.395 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:14:34.395 [124/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:14:34.395 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:14:34.395 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:14:34.395 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:14:34.395 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:14:34.654 [129/203] Compiling C object 
lib/libxnvme.a.p/xnvme_req.c.o 00:14:34.654 [130/203] Linking target lib/libxnvme.so 00:14:34.654 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:14:34.654 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:14:34.654 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:14:34.654 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:14:34.654 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:14:34.654 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:14:34.654 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:14:34.654 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:14:34.654 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:14:34.654 [140/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:14:34.654 [141/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:14:34.913 [142/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:14:34.913 [143/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:14:34.913 [144/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:14:34.913 [145/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:14:34.913 [146/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:14:34.913 [147/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:14:34.913 [148/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:14:34.913 [149/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:14:35.172 [150/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:14:35.172 [151/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:14:35.172 [152/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:14:35.172 [153/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:14:35.172 [154/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:14:35.172 [155/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:14:35.172 [156/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:14:35.172 [157/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:14:35.172 [158/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:14:35.172 [159/203] Compiling C object tools/xdd.p/xdd.c.o 00:14:35.172 [160/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:14:35.431 [161/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:14:35.431 [162/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:14:35.431 [163/203] Compiling C object tools/kvs.p/kvs.c.o 00:14:35.431 [164/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:14:35.431 [165/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:14:35.431 [166/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:14:35.431 [167/203] Compiling C object tools/zoned.p/zoned.c.o 00:14:35.431 [168/203] Compiling C object tools/lblk.p/lblk.c.o 00:14:35.689 [169/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:14:35.689 [170/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:14:35.689 [171/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:14:35.689 [172/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:14:35.946 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:14:35.946 [174/203] Linking static target lib/libxnvme.a 00:14:35.946 [175/203] Linking target 
tests/xnvme_tests_async_intf 00:14:35.946 [176/203] Linking target tests/xnvme_tests_cli 00:14:35.946 [177/203] Linking target tests/xnvme_tests_xnvme_file 00:14:35.946 [178/203] Linking target tests/xnvme_tests_buf 00:14:35.946 [179/203] Linking target tests/xnvme_tests_lblk 00:14:35.946 [180/203] Linking target tests/xnvme_tests_enum 00:14:35.946 [181/203] Linking target tests/xnvme_tests_znd_append 00:14:35.946 [182/203] Linking target tests/xnvme_tests_scc 00:14:35.946 [183/203] Linking target tests/xnvme_tests_xnvme_cli 00:14:35.946 [184/203] Linking target tests/xnvme_tests_ioworker 00:14:35.946 [185/203] Linking target tests/xnvme_tests_znd_state 00:14:35.946 [186/203] Linking target tests/xnvme_tests_znd_explicit_open 00:14:35.946 [187/203] Linking target tools/lblk 00:14:35.946 [188/203] Linking target tests/xnvme_tests_map 00:14:36.203 [189/203] Linking target tools/xdd 00:14:36.203 [190/203] Linking target tools/xnvme_file 00:14:36.203 [191/203] Linking target tools/xnvme 00:14:36.203 [192/203] Linking target tests/xnvme_tests_znd_zrwa 00:14:36.203 [193/203] Linking target tests/xnvme_tests_kvs 00:14:36.203 [194/203] Linking target tools/kvs 00:14:36.203 [195/203] Linking target examples/xnvme_io_async 00:14:36.203 [196/203] Linking target examples/zoned_io_async 00:14:36.203 [197/203] Linking target examples/xnvme_dev 00:14:36.203 [198/203] Linking target examples/xnvme_enum 00:14:36.203 [199/203] Linking target examples/xnvme_single_async 00:14:36.203 [200/203] Linking target examples/xnvme_single_sync 00:14:36.203 [201/203] Linking target tools/zoned 00:14:36.203 [202/203] Linking target examples/xnvme_hello 00:14:36.203 [203/203] Linking target examples/zoned_io_sync 00:14:36.203 INFO: autodetecting backend as ninja 00:14:36.203 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:14:36.203 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:14:46.168 The Meson build system 00:14:46.168 Version: 1.5.0 00:14:46.168 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:14:46.168 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:14:46.168 Build type: native build 00:14:46.168 Program cat found: YES (/usr/bin/cat) 00:14:46.168 Project name: DPDK 00:14:46.168 Project version: 24.03.0 00:14:46.168 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:14:46.168 C linker for the host machine: cc ld.bfd 2.40-14 00:14:46.168 Host machine cpu family: x86_64 00:14:46.168 Host machine cpu: x86_64 00:14:46.168 Message: ## Building in Developer Mode ## 00:14:46.168 Program pkg-config found: YES (/usr/bin/pkg-config) 00:14:46.168 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:14:46.168 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:14:46.168 Program python3 found: YES (/usr/bin/python3) 00:14:46.168 Program cat found: YES (/usr/bin/cat) 00:14:46.168 Compiler for C supports arguments -march=native: YES 00:14:46.168 Checking for size of "void *" : 8 00:14:46.168 Checking for size of "void *" : 8 (cached) 00:14:46.168 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:14:46.168 Library m found: YES 00:14:46.168 Library numa found: YES 00:14:46.168 Has header "numaif.h" : YES 00:14:46.168 Library fdt found: NO 00:14:46.168 Library execinfo found: NO 00:14:46.168 Has header "execinfo.h" : YES 00:14:46.168 Found pkg-config: YES (/usr/bin/pkg-config) 
1.9.5 00:14:46.168 Run-time dependency libarchive found: NO (tried pkgconfig) 00:14:46.168 Run-time dependency libbsd found: NO (tried pkgconfig) 00:14:46.168 Run-time dependency jansson found: NO (tried pkgconfig) 00:14:46.168 Run-time dependency openssl found: YES 3.1.1 00:14:46.168 Run-time dependency libpcap found: YES 1.10.4 00:14:46.168 Has header "pcap.h" with dependency libpcap: YES 00:14:46.168 Compiler for C supports arguments -Wcast-qual: YES 00:14:46.168 Compiler for C supports arguments -Wdeprecated: YES 00:14:46.168 Compiler for C supports arguments -Wformat: YES 00:14:46.168 Compiler for C supports arguments -Wformat-nonliteral: NO 00:14:46.168 Compiler for C supports arguments -Wformat-security: NO 00:14:46.168 Compiler for C supports arguments -Wmissing-declarations: YES 00:14:46.168 Compiler for C supports arguments -Wmissing-prototypes: YES 00:14:46.168 Compiler for C supports arguments -Wnested-externs: YES 00:14:46.168 Compiler for C supports arguments -Wold-style-definition: YES 00:14:46.168 Compiler for C supports arguments -Wpointer-arith: YES 00:14:46.168 Compiler for C supports arguments -Wsign-compare: YES 00:14:46.168 Compiler for C supports arguments -Wstrict-prototypes: YES 00:14:46.168 Compiler for C supports arguments -Wundef: YES 00:14:46.168 Compiler for C supports arguments -Wwrite-strings: YES 00:14:46.168 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:14:46.168 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:14:46.168 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:14:46.168 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:14:46.168 Program objdump found: YES (/usr/bin/objdump) 00:14:46.168 Compiler for C supports arguments -mavx512f: YES 00:14:46.168 Checking if "AVX512 checking" compiles: YES 00:14:46.168 Fetching value of define "__SSE4_2__" : 1 00:14:46.168 Fetching value of define "__AES__" : 1 00:14:46.168 Fetching value of define "__AVX__" : 1 00:14:46.168 Fetching value of define "__AVX2__" : 1 00:14:46.168 Fetching value of define "__AVX512BW__" : 1 00:14:46.168 Fetching value of define "__AVX512CD__" : 1 00:14:46.168 Fetching value of define "__AVX512DQ__" : 1 00:14:46.168 Fetching value of define "__AVX512F__" : 1 00:14:46.168 Fetching value of define "__AVX512VL__" : 1 00:14:46.168 Fetching value of define "__PCLMUL__" : 1 00:14:46.168 Fetching value of define "__RDRND__" : 1 00:14:46.168 Fetching value of define "__RDSEED__" : 1 00:14:46.168 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:14:46.168 Fetching value of define "__znver1__" : (undefined) 00:14:46.168 Fetching value of define "__znver2__" : (undefined) 00:14:46.168 Fetching value of define "__znver3__" : (undefined) 00:14:46.168 Fetching value of define "__znver4__" : (undefined) 00:14:46.169 Library asan found: YES 00:14:46.169 Compiler for C supports arguments -Wno-format-truncation: YES 00:14:46.169 Message: lib/log: Defining dependency "log" 00:14:46.169 Message: lib/kvargs: Defining dependency "kvargs" 00:14:46.169 Message: lib/telemetry: Defining dependency "telemetry" 00:14:46.169 Library rt found: YES 00:14:46.169 Checking for function "getentropy" : NO 00:14:46.169 Message: lib/eal: Defining dependency "eal" 00:14:46.169 Message: lib/ring: Defining dependency "ring" 00:14:46.169 Message: lib/rcu: Defining dependency "rcu" 00:14:46.169 Message: lib/mempool: Defining dependency "mempool" 00:14:46.169 Message: lib/mbuf: Defining dependency "mbuf" 00:14:46.169 Fetching value of 
define "__PCLMUL__" : 1 (cached) 00:14:46.169 Fetching value of define "__AVX512F__" : 1 (cached) 00:14:46.169 Fetching value of define "__AVX512BW__" : 1 (cached) 00:14:46.169 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:14:46.169 Fetching value of define "__AVX512VL__" : 1 (cached) 00:14:46.169 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:14:46.169 Compiler for C supports arguments -mpclmul: YES 00:14:46.169 Compiler for C supports arguments -maes: YES 00:14:46.169 Compiler for C supports arguments -mavx512f: YES (cached) 00:14:46.169 Compiler for C supports arguments -mavx512bw: YES 00:14:46.169 Compiler for C supports arguments -mavx512dq: YES 00:14:46.169 Compiler for C supports arguments -mavx512vl: YES 00:14:46.169 Compiler for C supports arguments -mvpclmulqdq: YES 00:14:46.169 Compiler for C supports arguments -mavx2: YES 00:14:46.169 Compiler for C supports arguments -mavx: YES 00:14:46.169 Message: lib/net: Defining dependency "net" 00:14:46.169 Message: lib/meter: Defining dependency "meter" 00:14:46.169 Message: lib/ethdev: Defining dependency "ethdev" 00:14:46.169 Message: lib/pci: Defining dependency "pci" 00:14:46.169 Message: lib/cmdline: Defining dependency "cmdline" 00:14:46.169 Message: lib/hash: Defining dependency "hash" 00:14:46.169 Message: lib/timer: Defining dependency "timer" 00:14:46.169 Message: lib/compressdev: Defining dependency "compressdev" 00:14:46.169 Message: lib/cryptodev: Defining dependency "cryptodev" 00:14:46.169 Message: lib/dmadev: Defining dependency "dmadev" 00:14:46.169 Compiler for C supports arguments -Wno-cast-qual: YES 00:14:46.169 Message: lib/power: Defining dependency "power" 00:14:46.169 Message: lib/reorder: Defining dependency "reorder" 00:14:46.169 Message: lib/security: Defining dependency "security" 00:14:46.169 Has header "linux/userfaultfd.h" : YES 00:14:46.169 Has header "linux/vduse.h" : YES 00:14:46.169 Message: lib/vhost: Defining dependency "vhost" 00:14:46.169 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:14:46.169 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:14:46.169 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:14:46.169 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:14:46.169 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:14:46.169 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:14:46.169 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:14:46.169 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:14:46.169 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:14:46.169 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:14:46.169 Program doxygen found: YES (/usr/local/bin/doxygen) 00:14:46.169 Configuring doxy-api-html.conf using configuration 00:14:46.169 Configuring doxy-api-man.conf using configuration 00:14:46.169 Program mandb found: YES (/usr/bin/mandb) 00:14:46.169 Program sphinx-build found: NO 00:14:46.169 Configuring rte_build_config.h using configuration 00:14:46.169 Message: 00:14:46.169 ================= 00:14:46.169 Applications Enabled 00:14:46.169 ================= 00:14:46.169 00:14:46.169 apps: 00:14:46.169 00:14:46.169 00:14:46.169 Message: 00:14:46.169 ================= 00:14:46.169 Libraries Enabled 00:14:46.169 ================= 00:14:46.169 00:14:46.169 libs: 00:14:46.169 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 
00:14:46.169 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:14:46.169 cryptodev, dmadev, power, reorder, security, vhost, 00:14:46.169 00:14:46.169 Message: 00:14:46.169 =============== 00:14:46.169 Drivers Enabled 00:14:46.169 =============== 00:14:46.169 00:14:46.169 common: 00:14:46.169 00:14:46.169 bus: 00:14:46.169 pci, vdev, 00:14:46.169 mempool: 00:14:46.169 ring, 00:14:46.169 dma: 00:14:46.169 00:14:46.169 net: 00:14:46.169 00:14:46.169 crypto: 00:14:46.169 00:14:46.169 compress: 00:14:46.169 00:14:46.169 vdpa: 00:14:46.169 00:14:46.169 00:14:46.169 Message: 00:14:46.169 ================= 00:14:46.169 Content Skipped 00:14:46.169 ================= 00:14:46.169 00:14:46.169 apps: 00:14:46.169 dumpcap: explicitly disabled via build config 00:14:46.169 graph: explicitly disabled via build config 00:14:46.169 pdump: explicitly disabled via build config 00:14:46.169 proc-info: explicitly disabled via build config 00:14:46.169 test-acl: explicitly disabled via build config 00:14:46.169 test-bbdev: explicitly disabled via build config 00:14:46.169 test-cmdline: explicitly disabled via build config 00:14:46.169 test-compress-perf: explicitly disabled via build config 00:14:46.169 test-crypto-perf: explicitly disabled via build config 00:14:46.169 test-dma-perf: explicitly disabled via build config 00:14:46.169 test-eventdev: explicitly disabled via build config 00:14:46.169 test-fib: explicitly disabled via build config 00:14:46.169 test-flow-perf: explicitly disabled via build config 00:14:46.169 test-gpudev: explicitly disabled via build config 00:14:46.169 test-mldev: explicitly disabled via build config 00:14:46.169 test-pipeline: explicitly disabled via build config 00:14:46.169 test-pmd: explicitly disabled via build config 00:14:46.169 test-regex: explicitly disabled via build config 00:14:46.169 test-sad: explicitly disabled via build config 00:14:46.169 test-security-perf: explicitly disabled via build config 00:14:46.169 00:14:46.169 libs: 00:14:46.169 argparse: explicitly disabled via build config 00:14:46.169 metrics: explicitly disabled via build config 00:14:46.169 acl: explicitly disabled via build config 00:14:46.169 bbdev: explicitly disabled via build config 00:14:46.169 bitratestats: explicitly disabled via build config 00:14:46.169 bpf: explicitly disabled via build config 00:14:46.169 cfgfile: explicitly disabled via build config 00:14:46.169 distributor: explicitly disabled via build config 00:14:46.169 efd: explicitly disabled via build config 00:14:46.169 eventdev: explicitly disabled via build config 00:14:46.169 dispatcher: explicitly disabled via build config 00:14:46.169 gpudev: explicitly disabled via build config 00:14:46.169 gro: explicitly disabled via build config 00:14:46.169 gso: explicitly disabled via build config 00:14:46.169 ip_frag: explicitly disabled via build config 00:14:46.169 jobstats: explicitly disabled via build config 00:14:46.169 latencystats: explicitly disabled via build config 00:14:46.169 lpm: explicitly disabled via build config 00:14:46.169 member: explicitly disabled via build config 00:14:46.169 pcapng: explicitly disabled via build config 00:14:46.169 rawdev: explicitly disabled via build config 00:14:46.169 regexdev: explicitly disabled via build config 00:14:46.169 mldev: explicitly disabled via build config 00:14:46.169 rib: explicitly disabled via build config 00:14:46.169 sched: explicitly disabled via build config 00:14:46.169 stack: explicitly disabled via build config 00:14:46.169 ipsec: explicitly disabled 
via build config 00:14:46.169 pdcp: explicitly disabled via build config 00:14:46.169 fib: explicitly disabled via build config 00:14:46.169 port: explicitly disabled via build config 00:14:46.169 pdump: explicitly disabled via build config 00:14:46.169 table: explicitly disabled via build config 00:14:46.169 pipeline: explicitly disabled via build config 00:14:46.169 graph: explicitly disabled via build config 00:14:46.169 node: explicitly disabled via build config 00:14:46.169 00:14:46.169 drivers: 00:14:46.169 common/cpt: not in enabled drivers build config 00:14:46.169 common/dpaax: not in enabled drivers build config 00:14:46.169 common/iavf: not in enabled drivers build config 00:14:46.169 common/idpf: not in enabled drivers build config 00:14:46.169 common/ionic: not in enabled drivers build config 00:14:46.169 common/mvep: not in enabled drivers build config 00:14:46.169 common/octeontx: not in enabled drivers build config 00:14:46.169 bus/auxiliary: not in enabled drivers build config 00:14:46.169 bus/cdx: not in enabled drivers build config 00:14:46.169 bus/dpaa: not in enabled drivers build config 00:14:46.169 bus/fslmc: not in enabled drivers build config 00:14:46.169 bus/ifpga: not in enabled drivers build config 00:14:46.169 bus/platform: not in enabled drivers build config 00:14:46.169 bus/uacce: not in enabled drivers build config 00:14:46.169 bus/vmbus: not in enabled drivers build config 00:14:46.169 common/cnxk: not in enabled drivers build config 00:14:46.169 common/mlx5: not in enabled drivers build config 00:14:46.169 common/nfp: not in enabled drivers build config 00:14:46.169 common/nitrox: not in enabled drivers build config 00:14:46.169 common/qat: not in enabled drivers build config 00:14:46.169 common/sfc_efx: not in enabled drivers build config 00:14:46.169 mempool/bucket: not in enabled drivers build config 00:14:46.169 mempool/cnxk: not in enabled drivers build config 00:14:46.169 mempool/dpaa: not in enabled drivers build config 00:14:46.169 mempool/dpaa2: not in enabled drivers build config 00:14:46.169 mempool/octeontx: not in enabled drivers build config 00:14:46.169 mempool/stack: not in enabled drivers build config 00:14:46.169 dma/cnxk: not in enabled drivers build config 00:14:46.169 dma/dpaa: not in enabled drivers build config 00:14:46.169 dma/dpaa2: not in enabled drivers build config 00:14:46.169 dma/hisilicon: not in enabled drivers build config 00:14:46.169 dma/idxd: not in enabled drivers build config 00:14:46.169 dma/ioat: not in enabled drivers build config 00:14:46.169 dma/skeleton: not in enabled drivers build config 00:14:46.169 net/af_packet: not in enabled drivers build config 00:14:46.169 net/af_xdp: not in enabled drivers build config 00:14:46.169 net/ark: not in enabled drivers build config 00:14:46.169 net/atlantic: not in enabled drivers build config 00:14:46.169 net/avp: not in enabled drivers build config 00:14:46.170 net/axgbe: not in enabled drivers build config 00:14:46.170 net/bnx2x: not in enabled drivers build config 00:14:46.170 net/bnxt: not in enabled drivers build config 00:14:46.170 net/bonding: not in enabled drivers build config 00:14:46.170 net/cnxk: not in enabled drivers build config 00:14:46.170 net/cpfl: not in enabled drivers build config 00:14:46.170 net/cxgbe: not in enabled drivers build config 00:14:46.170 net/dpaa: not in enabled drivers build config 00:14:46.170 net/dpaa2: not in enabled drivers build config 00:14:46.170 net/e1000: not in enabled drivers build config 00:14:46.170 net/ena: not in enabled 
drivers build config 00:14:46.170 net/enetc: not in enabled drivers build config 00:14:46.170 net/enetfec: not in enabled drivers build config 00:14:46.170 net/enic: not in enabled drivers build config 00:14:46.170 net/failsafe: not in enabled drivers build config 00:14:46.170 net/fm10k: not in enabled drivers build config 00:14:46.170 net/gve: not in enabled drivers build config 00:14:46.170 net/hinic: not in enabled drivers build config 00:14:46.170 net/hns3: not in enabled drivers build config 00:14:46.170 net/i40e: not in enabled drivers build config 00:14:46.170 net/iavf: not in enabled drivers build config 00:14:46.170 net/ice: not in enabled drivers build config 00:14:46.170 net/idpf: not in enabled drivers build config 00:14:46.170 net/igc: not in enabled drivers build config 00:14:46.170 net/ionic: not in enabled drivers build config 00:14:46.170 net/ipn3ke: not in enabled drivers build config 00:14:46.170 net/ixgbe: not in enabled drivers build config 00:14:46.170 net/mana: not in enabled drivers build config 00:14:46.170 net/memif: not in enabled drivers build config 00:14:46.170 net/mlx4: not in enabled drivers build config 00:14:46.170 net/mlx5: not in enabled drivers build config 00:14:46.170 net/mvneta: not in enabled drivers build config 00:14:46.170 net/mvpp2: not in enabled drivers build config 00:14:46.170 net/netvsc: not in enabled drivers build config 00:14:46.170 net/nfb: not in enabled drivers build config 00:14:46.170 net/nfp: not in enabled drivers build config 00:14:46.170 net/ngbe: not in enabled drivers build config 00:14:46.170 net/null: not in enabled drivers build config 00:14:46.170 net/octeontx: not in enabled drivers build config 00:14:46.170 net/octeon_ep: not in enabled drivers build config 00:14:46.170 net/pcap: not in enabled drivers build config 00:14:46.170 net/pfe: not in enabled drivers build config 00:14:46.170 net/qede: not in enabled drivers build config 00:14:46.170 net/ring: not in enabled drivers build config 00:14:46.170 net/sfc: not in enabled drivers build config 00:14:46.170 net/softnic: not in enabled drivers build config 00:14:46.170 net/tap: not in enabled drivers build config 00:14:46.170 net/thunderx: not in enabled drivers build config 00:14:46.170 net/txgbe: not in enabled drivers build config 00:14:46.170 net/vdev_netvsc: not in enabled drivers build config 00:14:46.170 net/vhost: not in enabled drivers build config 00:14:46.170 net/virtio: not in enabled drivers build config 00:14:46.170 net/vmxnet3: not in enabled drivers build config 00:14:46.170 raw/*: missing internal dependency, "rawdev" 00:14:46.170 crypto/armv8: not in enabled drivers build config 00:14:46.170 crypto/bcmfs: not in enabled drivers build config 00:14:46.170 crypto/caam_jr: not in enabled drivers build config 00:14:46.170 crypto/ccp: not in enabled drivers build config 00:14:46.170 crypto/cnxk: not in enabled drivers build config 00:14:46.170 crypto/dpaa_sec: not in enabled drivers build config 00:14:46.170 crypto/dpaa2_sec: not in enabled drivers build config 00:14:46.170 crypto/ipsec_mb: not in enabled drivers build config 00:14:46.170 crypto/mlx5: not in enabled drivers build config 00:14:46.170 crypto/mvsam: not in enabled drivers build config 00:14:46.170 crypto/nitrox: not in enabled drivers build config 00:14:46.170 crypto/null: not in enabled drivers build config 00:14:46.170 crypto/octeontx: not in enabled drivers build config 00:14:46.170 crypto/openssl: not in enabled drivers build config 00:14:46.170 crypto/scheduler: not in enabled drivers build 
config 00:14:46.170 crypto/uadk: not in enabled drivers build config 00:14:46.170 crypto/virtio: not in enabled drivers build config 00:14:46.170 compress/isal: not in enabled drivers build config 00:14:46.170 compress/mlx5: not in enabled drivers build config 00:14:46.170 compress/nitrox: not in enabled drivers build config 00:14:46.170 compress/octeontx: not in enabled drivers build config 00:14:46.170 compress/zlib: not in enabled drivers build config 00:14:46.170 regex/*: missing internal dependency, "regexdev" 00:14:46.170 ml/*: missing internal dependency, "mldev" 00:14:46.170 vdpa/ifc: not in enabled drivers build config 00:14:46.170 vdpa/mlx5: not in enabled drivers build config 00:14:46.170 vdpa/nfp: not in enabled drivers build config 00:14:46.170 vdpa/sfc: not in enabled drivers build config 00:14:46.170 event/*: missing internal dependency, "eventdev" 00:14:46.170 baseband/*: missing internal dependency, "bbdev" 00:14:46.170 gpu/*: missing internal dependency, "gpudev" 00:14:46.170 00:14:46.170 00:14:46.170 Build targets in project: 85 00:14:46.170 00:14:46.170 DPDK 24.03.0 00:14:46.170 00:14:46.170 User defined options 00:14:46.170 buildtype : debug 00:14:46.170 default_library : shared 00:14:46.170 libdir : lib 00:14:46.170 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:14:46.170 b_sanitize : address 00:14:46.170 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:14:46.170 c_link_args : 00:14:46.170 cpu_instruction_set: native 00:14:46.170 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:14:46.170 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:14:46.170 enable_docs : false 00:14:46.170 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:14:46.170 enable_kmods : false 00:14:46.170 max_lcores : 128 00:14:46.170 tests : false 00:14:46.170 00:14:46.170 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:14:46.170 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:14:46.170 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:14:46.170 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:14:46.170 [3/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:14:46.170 [4/268] Linking static target lib/librte_kvargs.a 00:14:46.170 [5/268] Linking static target lib/librte_log.a 00:14:46.428 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:14:46.686 [7/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:14:46.686 [8/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:14:46.944 [9/268] Linking static target lib/librte_telemetry.a 00:14:46.944 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:14:46.944 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:14:46.944 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:14:46.944 [13/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:14:46.944 [14/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:14:46.944 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:14:46.944 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:14:46.944 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:14:47.203 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:14:47.461 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:14:47.461 [20/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:14:47.720 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:14:47.720 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:14:47.720 [23/268] Linking target lib/librte_log.so.24.1 00:14:47.720 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:14:47.720 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:14:47.720 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:14:47.979 [27/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:14:47.979 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:14:47.979 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:14:47.979 [30/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:14:48.238 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:14:48.238 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:14:48.238 [33/268] Linking target lib/librte_kvargs.so.24.1 00:14:48.238 [34/268] Linking target lib/librte_telemetry.so.24.1 00:14:48.238 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:14:48.238 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:14:48.497 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:14:48.497 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:14:48.497 [39/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:14:48.497 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:14:48.497 [41/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:14:48.497 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:14:48.497 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:14:48.757 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:14:48.757 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:14:49.019 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:14:49.019 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:14:49.019 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:14:49.019 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:14:49.342 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:14:49.343 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:14:49.343 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:14:49.343 
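The Meson configure output above mixes two things worth unpacking. Every "Compiler for C supports arguments ..." line is a throwaway test compile against the flag in question, and the "User defined options" block records how this DPDK sub-build was configured. The sketch below reproduces both under stated assumptions: the probe source is hypothetical (Meson's real probe differs in detail), and the meson invocation is a plausible reconstruction from the options summary, not the literal command SPDK's scripts assembled.

# Roughly what a probe such as 'Compiler for C supports arguments -mavx512f: YES' boils down to:
echo 'int main(void) { return 0; }' > /tmp/probe.c
cc -mavx512f -Werror -c /tmp/probe.c -o /tmp/probe.o && echo YES || echo NO

# A plausible 'meson setup' matching the "User defined options" summary (run from the dpdk source dir):
meson setup build-tmp \
  --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
  --libdir=lib \
  --buildtype=debug \
  --default-library=shared \
  -Db_sanitize=address \
  -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
  -Dcpu_instruction_set=native \
  -Dmax_lcores=128 \
  -Dtests=false -Denable_docs=false -Denable_kmods=false \
  -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
  -Ddisable_apps=dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test \
  -Ddisable_libs=acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table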
[53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:14:49.343 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:14:49.343 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:14:49.600 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:14:49.600 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:14:49.857 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:14:49.857 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:14:49.857 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:14:49.857 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:14:49.857 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:14:50.115 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:14:50.115 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:14:50.115 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:14:50.115 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:14:50.375 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:14:50.634 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:14:50.634 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:14:50.634 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:14:50.634 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:14:50.634 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:14:50.634 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:14:50.634 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:14:50.892 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:14:50.892 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:14:50.892 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:14:50.892 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:14:51.151 [79/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:14:51.151 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:14:51.151 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:14:51.151 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:14:51.410 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:14:51.672 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:14:51.672 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:14:51.672 [86/268] Linking static target lib/librte_eal.a 00:14:51.672 [87/268] Linking static target lib/librte_ring.a 00:14:51.672 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:14:51.935 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:14:51.935 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:14:51.935 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:14:52.211 [92/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:14:52.211 [93/268] Linking static target lib/librte_mempool.a 00:14:52.211 [94/268] Linking static target lib/librte_rcu.a 
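The bracketed [n/268] entries are Ninja jobs from the plan Meson generated: "Compiling C object" steps produce the .o files, and each "Linking static target" step then simply archives them. A minimal sketch using the librte_log objects named in steps [1/268] and [3/268] above (paths relative to the build directory):

# 'Linking static target lib/librte_log.a', by hand:
ar rcs lib/librte_log.a \
    lib/librte_log.a.p/log_log.c.o \
    lib/librte_log.a.p/log_log_linux.c.o
# 'r' inserts the members, 'c' creates the archive, 's' writes the symbol index,
# so no separate ranlib pass is needed.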
00:14:52.211 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:14:52.211 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:14:52.470 [97/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:14:52.470 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:14:52.729 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:14:52.729 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:14:52.729 [101/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:14:52.729 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:14:52.729 [103/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:14:52.998 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:14:52.998 [105/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:14:52.998 [106/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:14:52.998 [107/268] Linking static target lib/librte_net.a 00:14:53.298 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:14:53.298 [109/268] Linking static target lib/librte_meter.a 00:14:53.298 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:14:53.556 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:14:53.556 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:14:53.556 [113/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:14:53.556 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:14:53.556 [115/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:14:53.556 [116/268] Linking static target lib/librte_mbuf.a 00:14:53.815 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:14:53.815 [118/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:14:54.073 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:14:54.332 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:14:54.332 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:14:54.590 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:14:54.590 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:14:54.848 [124/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:14:54.848 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:14:54.848 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:14:55.106 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:14:55.106 [128/268] Linking static target lib/librte_pci.a 00:14:55.106 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:14:55.106 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:14:55.106 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:14:55.106 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:14:55.364 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:14:55.364 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:14:55.364 [135/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:14:55.364 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:14:55.364 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:14:55.364 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:14:55.364 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:14:55.364 [140/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:14:55.365 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:14:55.365 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:14:55.623 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:14:55.623 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:14:55.623 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:14:55.623 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:14:55.881 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:14:55.881 [148/268] Linking static target lib/librte_cmdline.a 00:14:56.138 [149/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:14:56.138 [150/268] Linking static target lib/librte_timer.a 00:14:56.138 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:14:56.138 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:14:56.138 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:14:56.138 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:14:56.138 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:14:56.703 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:14:56.703 [157/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:14:56.704 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:14:56.704 [159/268] Linking static target lib/librte_compressdev.a 00:14:56.961 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:14:56.961 [161/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:14:56.961 [162/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:14:56.961 [163/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:14:57.219 [164/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:14:57.219 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:14:57.219 [166/268] Linking static target lib/librte_ethdev.a 00:14:57.219 [167/268] Linking static target lib/librte_dmadev.a 00:14:57.219 [168/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:14:57.219 [169/268] Linking static target lib/librte_hash.a 00:14:57.477 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:14:57.477 [171/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:14:57.477 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:14:57.735 [173/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:14:57.735 [174/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:14:57.994 
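The "Generating lib/<name>.sym_chk with a custom command" steps interleaved above are DPDK's exported-symbol checks: the check-symbols.sh helper located during configure verifies each library's exports against its declared symbol map. A quick way to inspect the export side by hand, assuming the illustrative path below (the real check is stricter than this):

# List the dynamic symbols a just-linked library actually exports:
nm -D --defined-only build-tmp/lib/librte_cmdline.so.24.1 | sort -k 3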
[175/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:14:57.994 [176/268] Linking static target lib/librte_cryptodev.a 00:14:57.994 [177/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:14:57.994 [178/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:14:58.253 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:14:58.253 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:14:58.253 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:14:58.253 [182/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:14:58.511 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:14:58.511 [184/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:14:58.511 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:14:58.511 [186/268] Linking static target lib/librte_power.a 00:14:58.845 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:14:59.104 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:14:59.104 [189/268] Linking static target lib/librte_reorder.a 00:14:59.104 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:14:59.104 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:14:59.104 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:14:59.104 [193/268] Linking static target lib/librte_security.a 00:14:59.673 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:14:59.930 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:15:00.188 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:15:00.188 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:15:00.188 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:15:00.447 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:15:00.447 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:15:00.447 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:15:00.705 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:15:00.964 [203/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:15:00.964 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:15:00.964 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:15:00.964 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:15:01.223 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:15:01.223 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:15:01.223 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:15:01.223 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:15:01.223 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:15:01.481 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:15:01.481 [213/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:15:01.481 
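Steps like [212/268] "Generating drivers/rte_bus_vdev.pmd.c with a custom command" exist so driver metadata travels inside the binary: the generated C file embeds a JSON description behind a PMD_INFO_STRING= marker, and the very next steps compile and link it into the driver. Whether the string ends up non-empty depends on the driver registering a PMD; a bus driver such as bus_vdev may record nothing, so treat this probe as illustrative (DPDK's usertools ship a proper reader):

# Look for embedded PMD metadata in the finished driver object:
strings build-tmp/drivers/librte_bus_vdev.so.24.1 | grep PMD_INFO_STRING \
    || echo 'no PMD info embedded'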
[214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:15:01.481 [215/268] Linking static target drivers/librte_bus_vdev.a 00:15:01.481 [216/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:15:01.740 [217/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:15:01.740 [218/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:15:01.740 [219/268] Linking static target drivers/librte_bus_pci.a 00:15:01.740 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:15:01.740 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:15:01.999 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:15:01.999 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:15:01.999 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:15:01.999 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:15:01.999 [226/268] Linking static target drivers/librte_mempool_ring.a 00:15:02.257 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:15:02.836 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:15:04.736 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:15:04.995 [230/268] Linking target lib/librte_eal.so.24.1 00:15:04.995 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:15:04.995 [232/268] Linking target lib/librte_meter.so.24.1 00:15:04.995 [233/268] Linking target lib/librte_pci.so.24.1 00:15:04.995 [234/268] Linking target lib/librte_dmadev.so.24.1 00:15:04.995 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:15:04.995 [236/268] Linking target lib/librte_ring.so.24.1 00:15:05.253 [237/268] Linking target lib/librte_timer.so.24.1 00:15:05.253 [238/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:15:05.253 [239/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:15:05.253 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:15:05.253 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:15:05.253 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:15:05.253 [243/268] Linking target lib/librte_rcu.so.24.1 00:15:05.253 [244/268] Linking target lib/librte_mempool.so.24.1 00:15:05.511 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:15:05.511 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:15:05.511 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:15:05.511 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:15:05.511 [249/268] Linking target lib/librte_mbuf.so.24.1 00:15:05.769 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:15:05.769 [251/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:15:05.769 [252/268] Linking target lib/librte_reorder.so.24.1 00:15:05.769 [253/268] Linking target lib/librte_net.so.24.1 00:15:05.769 [254/268] Linking target 
lib/librte_compressdev.so.24.1 00:15:05.769 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:15:06.028 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:15:06.028 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:15:06.028 [258/268] Linking target lib/librte_hash.so.24.1 00:15:06.028 [259/268] Linking target lib/librte_cmdline.so.24.1 00:15:06.028 [260/268] Linking target lib/librte_security.so.24.1 00:15:06.028 [261/268] Linking target lib/librte_ethdev.so.24.1 00:15:06.286 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:15:06.287 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:15:06.287 [264/268] Linking target lib/librte_power.so.24.1 00:15:07.712 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:15:07.970 [266/268] Linking static target lib/librte_vhost.a 00:15:09.342 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:15:09.342 [268/268] Linking target lib/librte_vhost.so.24.1 00:15:09.342 INFO: autodetecting backend as ninja 00:15:09.342 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:15:31.287 CC lib/log/log.o 00:15:31.287 CC lib/log/log_flags.o 00:15:31.287 CC lib/log/log_deprecated.o 00:15:31.287 CC lib/ut/ut.o 00:15:31.287 CC lib/ut_mock/mock.o 00:15:31.287 LIB libspdk_log.a 00:15:31.287 LIB libspdk_ut_mock.a 00:15:31.287 SO libspdk_log.so.7.0 00:15:31.287 SO libspdk_ut_mock.so.6.0 00:15:31.287 LIB libspdk_ut.a 00:15:31.287 SYMLINK libspdk_log.so 00:15:31.287 SYMLINK libspdk_ut_mock.so 00:15:31.287 SO libspdk_ut.so.2.0 00:15:31.287 SYMLINK libspdk_ut.so 00:15:31.544 CC lib/util/base64.o 00:15:31.544 CC lib/util/bit_array.o 00:15:31.544 CC lib/dma/dma.o 00:15:31.544 CC lib/util/cpuset.o 00:15:31.544 CC lib/util/crc32.o 00:15:31.544 CC lib/util/crc16.o 00:15:31.544 CC lib/ioat/ioat.o 00:15:31.544 CC lib/util/crc32c.o 00:15:31.544 CXX lib/trace_parser/trace.o 00:15:31.804 CC lib/util/crc32_ieee.o 00:15:31.804 CC lib/util/crc64.o 00:15:31.804 CC lib/vfio_user/host/vfio_user_pci.o 00:15:31.804 LIB libspdk_dma.a 00:15:31.804 CC lib/util/dif.o 00:15:31.804 CC lib/util/fd.o 00:15:31.804 SO libspdk_dma.so.5.0 00:15:31.804 CC lib/vfio_user/host/vfio_user.o 00:15:31.804 CC lib/util/fd_group.o 00:15:31.804 SYMLINK libspdk_dma.so 00:15:31.804 CC lib/util/file.o 00:15:32.070 CC lib/util/hexlify.o 00:15:32.070 CC lib/util/iov.o 00:15:32.070 LIB libspdk_ioat.a 00:15:32.070 SO libspdk_ioat.so.7.0 00:15:32.070 CC lib/util/math.o 00:15:32.070 CC lib/util/net.o 00:15:32.070 CC lib/util/pipe.o 00:15:32.070 CC lib/util/strerror_tls.o 00:15:32.070 CC lib/util/string.o 00:15:32.352 CC lib/util/uuid.o 00:15:32.352 SYMLINK libspdk_ioat.so 00:15:32.352 CC lib/util/xor.o 00:15:32.352 LIB libspdk_vfio_user.a 00:15:32.352 SO libspdk_vfio_user.so.5.0 00:15:32.352 CC lib/util/zipf.o 00:15:32.352 CC lib/util/md5.o 00:15:32.352 SYMLINK libspdk_vfio_user.so 00:15:32.619 LIB libspdk_util.a 00:15:32.890 SO libspdk_util.so.10.1 00:15:32.890 LIB libspdk_trace_parser.a 00:15:32.890 SO libspdk_trace_parser.so.6.0 00:15:32.890 SYMLINK libspdk_util.so 00:15:32.890 SYMLINK libspdk_trace_parser.so 00:15:33.164 CC lib/rdma_provider/common.o 00:15:33.164 CC lib/rdma_provider/rdma_provider_verbs.o 00:15:33.164 CC lib/rdma_utils/rdma_utils.o 00:15:33.164 CC lib/json/json_parse.o 00:15:33.164 CC 
lib/json/json_util.o 00:15:33.164 CC lib/json/json_write.o 00:15:33.164 CC lib/conf/conf.o 00:15:33.164 CC lib/env_dpdk/env.o 00:15:33.164 CC lib/vmd/vmd.o 00:15:33.164 CC lib/idxd/idxd.o 00:15:33.440 CC lib/idxd/idxd_user.o 00:15:33.440 LIB libspdk_rdma_provider.a 00:15:33.440 CC lib/idxd/idxd_kernel.o 00:15:33.440 SO libspdk_rdma_provider.so.6.0 00:15:33.440 CC lib/vmd/led.o 00:15:33.440 LIB libspdk_conf.a 00:15:33.440 LIB libspdk_json.a 00:15:33.440 LIB libspdk_rdma_utils.a 00:15:33.440 SO libspdk_conf.so.6.0 00:15:33.440 SYMLINK libspdk_rdma_provider.so 00:15:33.440 SO libspdk_json.so.6.0 00:15:33.440 SO libspdk_rdma_utils.so.1.0 00:15:33.440 CC lib/env_dpdk/memory.o 00:15:33.702 SYMLINK libspdk_conf.so 00:15:33.702 SYMLINK libspdk_rdma_utils.so 00:15:33.702 CC lib/env_dpdk/pci.o 00:15:33.702 CC lib/env_dpdk/init.o 00:15:33.702 CC lib/env_dpdk/threads.o 00:15:33.702 CC lib/env_dpdk/pci_ioat.o 00:15:33.702 SYMLINK libspdk_json.so 00:15:33.702 CC lib/env_dpdk/pci_virtio.o 00:15:33.702 CC lib/env_dpdk/pci_vmd.o 00:15:33.702 CC lib/env_dpdk/pci_idxd.o 00:15:33.702 CC lib/env_dpdk/pci_event.o 00:15:33.702 CC lib/jsonrpc/jsonrpc_server.o 00:15:33.960 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:15:33.960 CC lib/env_dpdk/sigbus_handler.o 00:15:33.960 CC lib/env_dpdk/pci_dpdk.o 00:15:33.960 LIB libspdk_idxd.a 00:15:33.960 SO libspdk_idxd.so.12.1 00:15:33.960 CC lib/env_dpdk/pci_dpdk_2207.o 00:15:33.960 CC lib/env_dpdk/pci_dpdk_2211.o 00:15:33.960 CC lib/jsonrpc/jsonrpc_client.o 00:15:34.218 LIB libspdk_vmd.a 00:15:34.218 SYMLINK libspdk_idxd.so 00:15:34.218 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:15:34.218 SO libspdk_vmd.so.6.0 00:15:34.218 SYMLINK libspdk_vmd.so 00:15:34.477 LIB libspdk_jsonrpc.a 00:15:34.477 SO libspdk_jsonrpc.so.6.0 00:15:34.477 SYMLINK libspdk_jsonrpc.so 00:15:34.735 CC lib/rpc/rpc.o 00:15:34.996 LIB libspdk_env_dpdk.a 00:15:34.996 LIB libspdk_rpc.a 00:15:34.996 SO libspdk_rpc.so.6.0 00:15:35.254 SYMLINK libspdk_rpc.so 00:15:35.254 SO libspdk_env_dpdk.so.15.1 00:15:35.254 SYMLINK libspdk_env_dpdk.so 00:15:35.512 CC lib/trace/trace_rpc.o 00:15:35.512 CC lib/trace/trace.o 00:15:35.512 CC lib/trace/trace_flags.o 00:15:35.512 CC lib/keyring/keyring_rpc.o 00:15:35.512 CC lib/keyring/keyring.o 00:15:35.512 CC lib/notify/notify.o 00:15:35.512 CC lib/notify/notify_rpc.o 00:15:35.512 LIB libspdk_notify.a 00:15:35.770 SO libspdk_notify.so.6.0 00:15:35.770 LIB libspdk_keyring.a 00:15:35.770 SYMLINK libspdk_notify.so 00:15:35.770 SO libspdk_keyring.so.2.0 00:15:35.770 LIB libspdk_trace.a 00:15:35.770 SYMLINK libspdk_keyring.so 00:15:35.770 SO libspdk_trace.so.11.0 00:15:36.028 SYMLINK libspdk_trace.so 00:15:36.287 CC lib/thread/thread.o 00:15:36.287 CC lib/thread/iobuf.o 00:15:36.287 CC lib/sock/sock_rpc.o 00:15:36.287 CC lib/sock/sock.o 00:15:36.854 LIB libspdk_sock.a 00:15:36.854 SO libspdk_sock.so.10.0 00:15:36.854 SYMLINK libspdk_sock.so 00:15:37.111 CC lib/nvme/nvme_ctrlr_cmd.o 00:15:37.111 CC lib/nvme/nvme_ctrlr.o 00:15:37.111 CC lib/nvme/nvme_ns_cmd.o 00:15:37.111 CC lib/nvme/nvme_ns.o 00:15:37.111 CC lib/nvme/nvme_fabric.o 00:15:37.111 CC lib/nvme/nvme_qpair.o 00:15:37.111 CC lib/nvme/nvme_pcie.o 00:15:37.111 CC lib/nvme/nvme_pcie_common.o 00:15:37.111 CC lib/nvme/nvme.o 00:15:38.045 CC lib/nvme/nvme_quirks.o 00:15:38.045 CC lib/nvme/nvme_transport.o 00:15:38.045 CC lib/nvme/nvme_discovery.o 00:15:38.302 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:15:38.302 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:15:38.302 LIB libspdk_thread.a 00:15:38.302 SO libspdk_thread.so.10.2 00:15:38.302 CC 
lib/nvme/nvme_tcp.o 00:15:38.302 SYMLINK libspdk_thread.so 00:15:38.302 CC lib/nvme/nvme_opal.o 00:15:38.561 CC lib/nvme/nvme_io_msg.o 00:15:38.818 CC lib/nvme/nvme_poll_group.o 00:15:38.818 CC lib/accel/accel.o 00:15:38.818 CC lib/accel/accel_rpc.o 00:15:38.818 CC lib/blob/blobstore.o 00:15:38.818 CC lib/blob/request.o 00:15:39.077 CC lib/accel/accel_sw.o 00:15:39.077 CC lib/blob/zeroes.o 00:15:39.336 CC lib/blob/blob_bs_dev.o 00:15:39.336 CC lib/init/json_config.o 00:15:39.336 CC lib/virtio/virtio.o 00:15:39.595 CC lib/nvme/nvme_zns.o 00:15:39.595 CC lib/nvme/nvme_stubs.o 00:15:39.595 CC lib/init/subsystem.o 00:15:39.595 CC lib/fsdev/fsdev.o 00:15:39.595 CC lib/fsdev/fsdev_io.o 00:15:39.851 CC lib/virtio/virtio_vhost_user.o 00:15:39.851 CC lib/init/subsystem_rpc.o 00:15:39.851 CC lib/init/rpc.o 00:15:40.110 CC lib/virtio/virtio_vfio_user.o 00:15:40.110 CC lib/fsdev/fsdev_rpc.o 00:15:40.110 LIB libspdk_init.a 00:15:40.110 SO libspdk_init.so.6.0 00:15:40.110 LIB libspdk_accel.a 00:15:40.369 CC lib/nvme/nvme_auth.o 00:15:40.369 SO libspdk_accel.so.16.0 00:15:40.369 CC lib/nvme/nvme_cuse.o 00:15:40.369 CC lib/virtio/virtio_pci.o 00:15:40.369 SYMLINK libspdk_init.so 00:15:40.369 SYMLINK libspdk_accel.so 00:15:40.369 CC lib/nvme/nvme_rdma.o 00:15:40.627 LIB libspdk_fsdev.a 00:15:40.627 CC lib/event/app.o 00:15:40.627 CC lib/event/reactor.o 00:15:40.627 CC lib/event/log_rpc.o 00:15:40.627 CC lib/bdev/bdev.o 00:15:40.627 SO libspdk_fsdev.so.1.0 00:15:40.949 LIB libspdk_virtio.a 00:15:40.949 SO libspdk_virtio.so.7.0 00:15:40.949 SYMLINK libspdk_fsdev.so 00:15:40.949 CC lib/event/app_rpc.o 00:15:40.949 SYMLINK libspdk_virtio.so 00:15:40.949 CC lib/event/scheduler_static.o 00:15:41.226 CC lib/bdev/bdev_rpc.o 00:15:41.226 CC lib/bdev/bdev_zone.o 00:15:41.226 CC lib/bdev/part.o 00:15:41.226 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:15:41.484 LIB libspdk_event.a 00:15:41.484 CC lib/bdev/scsi_nvme.o 00:15:41.484 SO libspdk_event.so.15.0 00:15:41.484 SYMLINK libspdk_event.so 00:15:42.418 LIB libspdk_nvme.a 00:15:42.418 LIB libspdk_fuse_dispatcher.a 00:15:42.418 SO libspdk_fuse_dispatcher.so.1.0 00:15:42.418 SYMLINK libspdk_fuse_dispatcher.so 00:15:42.418 SO libspdk_nvme.so.15.0 00:15:42.984 SYMLINK libspdk_nvme.so 00:15:43.242 LIB libspdk_blob.a 00:15:43.500 SO libspdk_blob.so.11.0 00:15:43.500 SYMLINK libspdk_blob.so 00:15:43.758 CC lib/blobfs/blobfs.o 00:15:43.758 CC lib/blobfs/tree.o 00:15:43.758 CC lib/lvol/lvol.o 00:15:44.322 LIB libspdk_bdev.a 00:15:44.322 SO libspdk_bdev.so.17.0 00:15:44.581 SYMLINK libspdk_bdev.so 00:15:44.839 CC lib/nvmf/ctrlr.o 00:15:44.839 CC lib/nvmf/ctrlr_discovery.o 00:15:44.839 CC lib/nvmf/ctrlr_bdev.o 00:15:44.839 CC lib/nvmf/subsystem.o 00:15:44.839 CC lib/scsi/dev.o 00:15:44.839 CC lib/ublk/ublk.o 00:15:44.839 CC lib/nbd/nbd.o 00:15:44.839 CC lib/ftl/ftl_core.o 00:15:45.405 LIB libspdk_blobfs.a 00:15:45.405 LIB libspdk_lvol.a 00:15:45.405 SO libspdk_blobfs.so.10.0 00:15:45.405 CC lib/scsi/lun.o 00:15:45.405 SO libspdk_lvol.so.10.0 00:15:45.405 SYMLINK libspdk_blobfs.so 00:15:45.405 CC lib/nbd/nbd_rpc.o 00:15:45.405 CC lib/ftl/ftl_init.o 00:15:45.405 SYMLINK libspdk_lvol.so 00:15:45.405 CC lib/ublk/ublk_rpc.o 00:15:45.405 CC lib/ftl/ftl_layout.o 00:15:45.662 CC lib/ftl/ftl_debug.o 00:15:45.662 CC lib/ftl/ftl_io.o 00:15:45.662 LIB libspdk_nbd.a 00:15:45.662 SO libspdk_nbd.so.7.0 00:15:45.921 SYMLINK libspdk_nbd.so 00:15:45.921 CC lib/nvmf/nvmf.o 00:15:45.921 CC lib/nvmf/nvmf_rpc.o 00:15:45.921 CC lib/scsi/port.o 00:15:45.921 CC lib/nvmf/transport.o 00:15:45.921 CC 
lib/nvmf/tcp.o 00:15:45.921 CC lib/ftl/ftl_sb.o 00:15:46.181 CC lib/scsi/scsi.o 00:15:46.181 LIB libspdk_ublk.a 00:15:46.181 CC lib/ftl/ftl_l2p.o 00:15:46.181 CC lib/scsi/scsi_bdev.o 00:15:46.181 SO libspdk_ublk.so.3.0 00:15:46.439 SYMLINK libspdk_ublk.so 00:15:46.439 CC lib/ftl/ftl_l2p_flat.o 00:15:46.439 CC lib/nvmf/stubs.o 00:15:46.439 CC lib/ftl/ftl_nv_cache.o 00:15:46.698 CC lib/ftl/ftl_band.o 00:15:46.698 CC lib/ftl/ftl_band_ops.o 00:15:47.055 CC lib/scsi/scsi_pr.o 00:15:47.055 CC lib/scsi/scsi_rpc.o 00:15:47.055 CC lib/nvmf/mdns_server.o 00:15:47.313 CC lib/nvmf/rdma.o 00:15:47.313 CC lib/ftl/ftl_writer.o 00:15:47.313 CC lib/ftl/ftl_rq.o 00:15:47.313 CC lib/scsi/task.o 00:15:47.313 CC lib/nvmf/auth.o 00:15:47.313 CC lib/ftl/ftl_reloc.o 00:15:47.313 CC lib/ftl/ftl_l2p_cache.o 00:15:47.572 CC lib/ftl/ftl_p2l.o 00:15:47.572 CC lib/ftl/ftl_p2l_log.o 00:15:47.573 CC lib/ftl/mngt/ftl_mngt.o 00:15:47.573 LIB libspdk_scsi.a 00:15:47.833 SO libspdk_scsi.so.9.0 00:15:47.833 SYMLINK libspdk_scsi.so 00:15:47.833 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:15:48.092 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:15:48.092 CC lib/ftl/mngt/ftl_mngt_startup.o 00:15:48.092 CC lib/ftl/mngt/ftl_mngt_md.o 00:15:48.092 CC lib/ftl/mngt/ftl_mngt_misc.o 00:15:48.092 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:15:48.351 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:15:48.351 CC lib/ftl/mngt/ftl_mngt_band.o 00:15:48.351 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:15:48.351 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:15:48.351 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:15:48.351 CC lib/iscsi/conn.o 00:15:48.614 CC lib/iscsi/init_grp.o 00:15:48.614 CC lib/iscsi/iscsi.o 00:15:48.614 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:15:48.614 CC lib/iscsi/param.o 00:15:48.874 CC lib/iscsi/portal_grp.o 00:15:48.874 CC lib/iscsi/tgt_node.o 00:15:48.874 CC lib/vhost/vhost.o 00:15:48.874 CC lib/vhost/vhost_rpc.o 00:15:48.874 CC lib/vhost/vhost_scsi.o 00:15:48.874 CC lib/ftl/utils/ftl_conf.o 00:15:49.132 CC lib/iscsi/iscsi_subsystem.o 00:15:49.132 CC lib/ftl/utils/ftl_md.o 00:15:49.133 CC lib/ftl/utils/ftl_mempool.o 00:15:49.391 CC lib/ftl/utils/ftl_bitmap.o 00:15:49.391 CC lib/vhost/vhost_blk.o 00:15:49.391 CC lib/vhost/rte_vhost_user.o 00:15:49.649 CC lib/iscsi/iscsi_rpc.o 00:15:49.649 CC lib/iscsi/task.o 00:15:49.649 CC lib/ftl/utils/ftl_property.o 00:15:49.649 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:15:49.907 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:15:49.907 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:15:49.907 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:15:49.907 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:15:50.166 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:15:50.166 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:15:50.166 CC lib/ftl/upgrade/ftl_sb_v3.o 00:15:50.166 CC lib/ftl/upgrade/ftl_sb_v5.o 00:15:50.166 CC lib/ftl/nvc/ftl_nvc_dev.o 00:15:50.166 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:15:50.166 LIB libspdk_nvmf.a 00:15:50.424 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:15:50.424 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:15:50.424 CC lib/ftl/base/ftl_base_dev.o 00:15:50.424 CC lib/ftl/base/ftl_base_bdev.o 00:15:50.682 SO libspdk_nvmf.so.19.0 00:15:50.682 CC lib/ftl/ftl_trace.o 00:15:50.682 LIB libspdk_iscsi.a 00:15:50.941 SYMLINK libspdk_nvmf.so 00:15:50.941 SO libspdk_iscsi.so.8.0 00:15:50.941 LIB libspdk_ftl.a 00:15:51.200 SYMLINK libspdk_iscsi.so 00:15:51.200 LIB libspdk_vhost.a 00:15:51.200 SO libspdk_vhost.so.8.0 00:15:51.200 SO libspdk_ftl.so.9.0 00:15:51.459 SYMLINK libspdk_vhost.so 00:15:51.459 SYMLINK libspdk_ftl.so 00:15:52.025 CC module/env_dpdk/env_dpdk_rpc.o 00:15:52.025 CC 
module/accel/iaa/accel_iaa.o 00:15:52.025 CC module/fsdev/aio/fsdev_aio.o 00:15:52.025 CC module/scheduler/dynamic/scheduler_dynamic.o 00:15:52.025 CC module/accel/error/accel_error.o 00:15:52.025 CC module/accel/ioat/accel_ioat.o 00:15:52.025 CC module/sock/posix/posix.o 00:15:52.025 CC module/accel/dsa/accel_dsa.o 00:15:52.025 CC module/keyring/file/keyring.o 00:15:52.025 CC module/blob/bdev/blob_bdev.o 00:15:52.282 LIB libspdk_env_dpdk_rpc.a 00:15:52.282 SO libspdk_env_dpdk_rpc.so.6.0 00:15:52.282 SYMLINK libspdk_env_dpdk_rpc.so 00:15:52.282 CC module/keyring/file/keyring_rpc.o 00:15:52.282 CC module/accel/ioat/accel_ioat_rpc.o 00:15:52.282 CC module/accel/error/accel_error_rpc.o 00:15:52.282 CC module/fsdev/aio/fsdev_aio_rpc.o 00:15:52.282 LIB libspdk_scheduler_dynamic.a 00:15:52.282 CC module/accel/iaa/accel_iaa_rpc.o 00:15:52.539 SO libspdk_scheduler_dynamic.so.4.0 00:15:52.539 CC module/accel/dsa/accel_dsa_rpc.o 00:15:52.539 LIB libspdk_blob_bdev.a 00:15:52.539 LIB libspdk_keyring_file.a 00:15:52.539 SYMLINK libspdk_scheduler_dynamic.so 00:15:52.539 LIB libspdk_accel_error.a 00:15:52.539 SO libspdk_blob_bdev.so.11.0 00:15:52.539 LIB libspdk_accel_ioat.a 00:15:52.539 SO libspdk_keyring_file.so.2.0 00:15:52.539 SO libspdk_accel_error.so.2.0 00:15:52.539 SO libspdk_accel_ioat.so.6.0 00:15:52.539 SYMLINK libspdk_blob_bdev.so 00:15:52.539 SYMLINK libspdk_keyring_file.so 00:15:52.539 LIB libspdk_accel_dsa.a 00:15:52.797 LIB libspdk_accel_iaa.a 00:15:52.797 CC module/fsdev/aio/linux_aio_mgr.o 00:15:52.797 SYMLINK libspdk_accel_error.so 00:15:52.797 SO libspdk_accel_dsa.so.5.0 00:15:52.797 SYMLINK libspdk_accel_ioat.so 00:15:52.797 SO libspdk_accel_iaa.so.3.0 00:15:52.797 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:15:52.797 CC module/scheduler/gscheduler/gscheduler.o 00:15:52.797 SYMLINK libspdk_accel_dsa.so 00:15:52.797 SYMLINK libspdk_accel_iaa.so 00:15:52.797 CC module/keyring/linux/keyring.o 00:15:52.797 CC module/keyring/linux/keyring_rpc.o 00:15:52.797 LIB libspdk_scheduler_dpdk_governor.a 00:15:53.055 LIB libspdk_scheduler_gscheduler.a 00:15:53.055 SO libspdk_scheduler_dpdk_governor.so.4.0 00:15:53.055 SO libspdk_scheduler_gscheduler.so.4.0 00:15:53.055 CC module/bdev/gpt/gpt.o 00:15:53.055 CC module/bdev/error/vbdev_error.o 00:15:53.055 CC module/bdev/delay/vbdev_delay.o 00:15:53.055 CC module/blobfs/bdev/blobfs_bdev.o 00:15:53.055 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:15:53.055 SYMLINK libspdk_scheduler_dpdk_governor.so 00:15:53.055 CC module/bdev/delay/vbdev_delay_rpc.o 00:15:53.055 LIB libspdk_fsdev_aio.a 00:15:53.055 SYMLINK libspdk_scheduler_gscheduler.so 00:15:53.055 CC module/bdev/error/vbdev_error_rpc.o 00:15:53.055 LIB libspdk_keyring_linux.a 00:15:53.055 SO libspdk_fsdev_aio.so.1.0 00:15:53.055 LIB libspdk_sock_posix.a 00:15:53.055 SO libspdk_keyring_linux.so.1.0 00:15:53.312 SO libspdk_sock_posix.so.6.0 00:15:53.312 SYMLINK libspdk_keyring_linux.so 00:15:53.312 SYMLINK libspdk_fsdev_aio.so 00:15:53.312 CC module/bdev/gpt/vbdev_gpt.o 00:15:53.312 SYMLINK libspdk_sock_posix.so 00:15:53.312 LIB libspdk_bdev_error.a 00:15:53.312 LIB libspdk_blobfs_bdev.a 00:15:53.312 SO libspdk_blobfs_bdev.so.6.0 00:15:53.312 SO libspdk_bdev_error.so.6.0 00:15:53.570 SYMLINK libspdk_blobfs_bdev.so 00:15:53.570 LIB libspdk_bdev_delay.a 00:15:53.570 CC module/bdev/lvol/vbdev_lvol.o 00:15:53.570 SYMLINK libspdk_bdev_error.so 00:15:53.570 CC module/bdev/null/bdev_null.o 00:15:53.570 CC module/bdev/malloc/bdev_malloc.o 00:15:53.570 CC module/bdev/nvme/bdev_nvme.o 00:15:53.570 SO 
libspdk_bdev_delay.so.6.0 00:15:53.570 CC module/bdev/passthru/vbdev_passthru.o 00:15:53.570 CC module/bdev/raid/bdev_raid.o 00:15:53.570 SYMLINK libspdk_bdev_delay.so 00:15:53.570 CC module/bdev/raid/bdev_raid_rpc.o 00:15:53.570 LIB libspdk_bdev_gpt.a 00:15:53.570 CC module/bdev/split/vbdev_split.o 00:15:53.570 SO libspdk_bdev_gpt.so.6.0 00:15:53.828 CC module/bdev/zone_block/vbdev_zone_block.o 00:15:53.828 SYMLINK libspdk_bdev_gpt.so 00:15:53.828 CC module/bdev/split/vbdev_split_rpc.o 00:15:53.828 CC module/bdev/null/bdev_null_rpc.o 00:15:54.111 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:15:54.111 LIB libspdk_bdev_split.a 00:15:54.111 CC module/bdev/malloc/bdev_malloc_rpc.o 00:15:54.111 SO libspdk_bdev_split.so.6.0 00:15:54.111 LIB libspdk_bdev_null.a 00:15:54.111 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:15:54.111 SO libspdk_bdev_null.so.6.0 00:15:54.111 CC module/bdev/xnvme/bdev_xnvme.o 00:15:54.111 SYMLINK libspdk_bdev_split.so 00:15:54.111 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:15:54.111 LIB libspdk_bdev_passthru.a 00:15:54.111 CC module/bdev/aio/bdev_aio.o 00:15:54.111 CC module/bdev/aio/bdev_aio_rpc.o 00:15:54.111 SO libspdk_bdev_passthru.so.6.0 00:15:54.111 SYMLINK libspdk_bdev_null.so 00:15:54.111 CC module/bdev/raid/bdev_raid_sb.o 00:15:54.367 LIB libspdk_bdev_malloc.a 00:15:54.367 SYMLINK libspdk_bdev_passthru.so 00:15:54.367 CC module/bdev/raid/raid0.o 00:15:54.367 SO libspdk_bdev_malloc.so.6.0 00:15:54.367 LIB libspdk_bdev_zone_block.a 00:15:54.367 SO libspdk_bdev_zone_block.so.6.0 00:15:54.367 SYMLINK libspdk_bdev_malloc.so 00:15:54.367 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:15:54.367 CC module/bdev/raid/raid1.o 00:15:54.367 SYMLINK libspdk_bdev_zone_block.so 00:15:54.367 CC module/bdev/nvme/bdev_nvme_rpc.o 00:15:54.367 CC module/bdev/nvme/nvme_rpc.o 00:15:54.624 CC module/bdev/raid/concat.o 00:15:54.624 LIB libspdk_bdev_aio.a 00:15:54.624 LIB libspdk_bdev_lvol.a 00:15:54.624 LIB libspdk_bdev_xnvme.a 00:15:54.624 SO libspdk_bdev_aio.so.6.0 00:15:54.624 SO libspdk_bdev_lvol.so.6.0 00:15:54.624 SO libspdk_bdev_xnvme.so.3.0 00:15:54.624 SYMLINK libspdk_bdev_aio.so 00:15:54.624 SYMLINK libspdk_bdev_lvol.so 00:15:54.624 SYMLINK libspdk_bdev_xnvme.so 00:15:54.624 CC module/bdev/nvme/bdev_mdns_client.o 00:15:54.624 CC module/bdev/nvme/vbdev_opal.o 00:15:54.882 CC module/bdev/ftl/bdev_ftl.o 00:15:54.882 CC module/bdev/nvme/vbdev_opal_rpc.o 00:15:54.882 CC module/bdev/ftl/bdev_ftl_rpc.o 00:15:54.882 LIB libspdk_bdev_raid.a 00:15:54.882 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:15:54.882 SO libspdk_bdev_raid.so.6.0 00:15:54.882 CC module/bdev/iscsi/bdev_iscsi.o 00:15:55.140 CC module/bdev/virtio/bdev_virtio_scsi.o 00:15:55.140 CC module/bdev/virtio/bdev_virtio_blk.o 00:15:55.140 SYMLINK libspdk_bdev_raid.so 00:15:55.140 CC module/bdev/virtio/bdev_virtio_rpc.o 00:15:55.140 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:15:55.140 LIB libspdk_bdev_ftl.a 00:15:55.140 SO libspdk_bdev_ftl.so.6.0 00:15:55.140 SYMLINK libspdk_bdev_ftl.so 00:15:55.398 LIB libspdk_bdev_iscsi.a 00:15:55.398 SO libspdk_bdev_iscsi.so.6.0 00:15:55.398 SYMLINK libspdk_bdev_iscsi.so 00:15:55.657 LIB libspdk_bdev_virtio.a 00:15:55.657 SO libspdk_bdev_virtio.so.6.0 00:15:55.914 SYMLINK libspdk_bdev_virtio.so 00:15:56.480 LIB libspdk_bdev_nvme.a 00:15:56.757 SO libspdk_bdev_nvme.so.7.0 00:15:56.757 SYMLINK libspdk_bdev_nvme.so 00:15:57.327 CC module/event/subsystems/iobuf/iobuf.o 00:15:57.327 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:15:57.327 CC module/event/subsystems/vhost_blk/vhost_blk.o 
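From 00:15:31 onward the log is SPDK's own make output, where each library follows the same pattern: CC compiles the objects, LIB archives the static library, SO links the versioned shared object, and SYMLINK adds the unversioned name that '-lspdk_<name>' resolves at link time. A conventional equivalent for libspdk_log, using the objects from its CC lines (illustrative only; the exact soname is whatever SPDK's mk rules set):

# 'SO libspdk_log.so.7.0' followed by 'SYMLINK libspdk_log.so', approximately:
cc -shared -fPIC -Wl,-soname,libspdk_log.so.7.0 \
    -o libspdk_log.so.7.0 lib/log/log.o lib/log/log_flags.o lib/log/log_deprecated.o
ln -sf libspdk_log.so.7.0 libspdk_log.so   # lets '-lspdk_log' find the versioned library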
00:15:57.327 CC module/event/subsystems/scheduler/scheduler.o 00:15:57.327 CC module/event/subsystems/vmd/vmd_rpc.o 00:15:57.327 CC module/event/subsystems/fsdev/fsdev.o 00:15:57.327 CC module/event/subsystems/vmd/vmd.o 00:15:57.327 CC module/event/subsystems/keyring/keyring.o 00:15:57.327 CC module/event/subsystems/sock/sock.o 00:15:57.584 LIB libspdk_event_vhost_blk.a 00:15:57.584 LIB libspdk_event_sock.a 00:15:57.584 LIB libspdk_event_scheduler.a 00:15:57.584 LIB libspdk_event_keyring.a 00:15:57.584 SO libspdk_event_vhost_blk.so.3.0 00:15:57.584 SO libspdk_event_sock.so.5.0 00:15:57.584 LIB libspdk_event_vmd.a 00:15:57.584 LIB libspdk_event_fsdev.a 00:15:57.584 SO libspdk_event_scheduler.so.4.0 00:15:57.584 SO libspdk_event_keyring.so.1.0 00:15:57.584 LIB libspdk_event_iobuf.a 00:15:57.584 SO libspdk_event_fsdev.so.1.0 00:15:57.584 SO libspdk_event_vmd.so.6.0 00:15:57.584 SYMLINK libspdk_event_vhost_blk.so 00:15:57.584 SYMLINK libspdk_event_sock.so 00:15:57.584 SO libspdk_event_iobuf.so.3.0 00:15:57.584 SYMLINK libspdk_event_scheduler.so 00:15:57.584 SYMLINK libspdk_event_keyring.so 00:15:57.584 SYMLINK libspdk_event_vmd.so 00:15:57.584 SYMLINK libspdk_event_fsdev.so 00:15:57.841 SYMLINK libspdk_event_iobuf.so 00:15:58.098 CC module/event/subsystems/accel/accel.o 00:15:58.098 LIB libspdk_event_accel.a 00:15:58.098 SO libspdk_event_accel.so.6.0 00:15:58.356 SYMLINK libspdk_event_accel.so 00:15:58.613 CC module/event/subsystems/bdev/bdev.o 00:15:58.871 LIB libspdk_event_bdev.a 00:15:58.871 SO libspdk_event_bdev.so.6.0 00:15:58.871 SYMLINK libspdk_event_bdev.so 00:15:59.129 CC module/event/subsystems/scsi/scsi.o 00:15:59.129 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:15:59.129 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:15:59.129 CC module/event/subsystems/ublk/ublk.o 00:15:59.129 CC module/event/subsystems/nbd/nbd.o 00:15:59.386 LIB libspdk_event_ublk.a 00:15:59.386 LIB libspdk_event_scsi.a 00:15:59.386 LIB libspdk_event_nbd.a 00:15:59.386 SO libspdk_event_ublk.so.3.0 00:15:59.386 SO libspdk_event_scsi.so.6.0 00:15:59.386 SO libspdk_event_nbd.so.6.0 00:15:59.644 SYMLINK libspdk_event_ublk.so 00:15:59.644 LIB libspdk_event_nvmf.a 00:15:59.644 SYMLINK libspdk_event_nbd.so 00:15:59.644 SYMLINK libspdk_event_scsi.so 00:15:59.644 SO libspdk_event_nvmf.so.6.0 00:15:59.644 SYMLINK libspdk_event_nvmf.so 00:15:59.902 CC module/event/subsystems/iscsi/iscsi.o 00:15:59.902 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:15:59.902 LIB libspdk_event_iscsi.a 00:15:59.902 LIB libspdk_event_vhost_scsi.a 00:15:59.902 SO libspdk_event_iscsi.so.6.0 00:16:00.160 SO libspdk_event_vhost_scsi.so.3.0 00:16:00.160 SYMLINK libspdk_event_iscsi.so 00:16:00.160 SYMLINK libspdk_event_vhost_scsi.so 00:16:00.417 SO libspdk.so.6.0 00:16:00.417 SYMLINK libspdk.so 00:16:00.674 CC test/rpc_client/rpc_client_test.o 00:16:00.674 TEST_HEADER include/spdk/accel.h 00:16:00.674 TEST_HEADER include/spdk/accel_module.h 00:16:00.674 TEST_HEADER include/spdk/assert.h 00:16:00.674 TEST_HEADER include/spdk/barrier.h 00:16:00.674 CXX app/trace/trace.o 00:16:00.674 TEST_HEADER include/spdk/base64.h 00:16:00.674 TEST_HEADER include/spdk/bdev.h 00:16:00.674 TEST_HEADER include/spdk/bdev_module.h 00:16:00.674 TEST_HEADER include/spdk/bdev_zone.h 00:16:00.674 TEST_HEADER include/spdk/bit_array.h 00:16:00.674 TEST_HEADER include/spdk/bit_pool.h 00:16:00.674 TEST_HEADER include/spdk/blob_bdev.h 00:16:00.674 TEST_HEADER include/spdk/blobfs_bdev.h 00:16:00.674 TEST_HEADER include/spdk/blobfs.h 00:16:00.674 TEST_HEADER 
include/spdk/blob.h 00:16:00.674 TEST_HEADER include/spdk/conf.h 00:16:00.674 TEST_HEADER include/spdk/config.h 00:16:00.674 TEST_HEADER include/spdk/cpuset.h 00:16:00.674 TEST_HEADER include/spdk/crc16.h 00:16:00.674 CC examples/interrupt_tgt/interrupt_tgt.o 00:16:00.674 TEST_HEADER include/spdk/crc32.h 00:16:00.674 TEST_HEADER include/spdk/crc64.h 00:16:00.674 TEST_HEADER include/spdk/dif.h 00:16:00.674 TEST_HEADER include/spdk/dma.h 00:16:00.674 TEST_HEADER include/spdk/endian.h 00:16:00.674 TEST_HEADER include/spdk/env_dpdk.h 00:16:00.674 TEST_HEADER include/spdk/env.h 00:16:00.674 TEST_HEADER include/spdk/event.h 00:16:00.674 TEST_HEADER include/spdk/fd_group.h 00:16:00.674 TEST_HEADER include/spdk/fd.h 00:16:00.674 TEST_HEADER include/spdk/file.h 00:16:00.674 TEST_HEADER include/spdk/fsdev.h 00:16:00.674 TEST_HEADER include/spdk/fsdev_module.h 00:16:00.674 TEST_HEADER include/spdk/ftl.h 00:16:00.674 TEST_HEADER include/spdk/fuse_dispatcher.h 00:16:00.674 TEST_HEADER include/spdk/gpt_spec.h 00:16:00.674 TEST_HEADER include/spdk/hexlify.h 00:16:00.674 TEST_HEADER include/spdk/histogram_data.h 00:16:00.674 TEST_HEADER include/spdk/idxd.h 00:16:00.674 TEST_HEADER include/spdk/idxd_spec.h 00:16:00.674 TEST_HEADER include/spdk/init.h 00:16:00.674 CC test/thread/poller_perf/poller_perf.o 00:16:00.674 TEST_HEADER include/spdk/ioat.h 00:16:00.674 TEST_HEADER include/spdk/ioat_spec.h 00:16:00.675 TEST_HEADER include/spdk/iscsi_spec.h 00:16:00.675 TEST_HEADER include/spdk/json.h 00:16:00.675 CC examples/util/zipf/zipf.o 00:16:00.675 TEST_HEADER include/spdk/jsonrpc.h 00:16:00.675 TEST_HEADER include/spdk/keyring.h 00:16:00.675 CC examples/ioat/perf/perf.o 00:16:00.675 TEST_HEADER include/spdk/keyring_module.h 00:16:00.675 TEST_HEADER include/spdk/likely.h 00:16:00.675 TEST_HEADER include/spdk/log.h 00:16:00.675 TEST_HEADER include/spdk/lvol.h 00:16:00.675 TEST_HEADER include/spdk/md5.h 00:16:00.675 TEST_HEADER include/spdk/memory.h 00:16:00.675 TEST_HEADER include/spdk/mmio.h 00:16:00.675 CC test/dma/test_dma/test_dma.o 00:16:00.675 TEST_HEADER include/spdk/nbd.h 00:16:00.675 CC test/app/bdev_svc/bdev_svc.o 00:16:00.675 TEST_HEADER include/spdk/net.h 00:16:00.675 TEST_HEADER include/spdk/notify.h 00:16:00.675 TEST_HEADER include/spdk/nvme.h 00:16:00.675 TEST_HEADER include/spdk/nvme_intel.h 00:16:00.675 TEST_HEADER include/spdk/nvme_ocssd.h 00:16:00.675 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:16:00.675 TEST_HEADER include/spdk/nvme_spec.h 00:16:00.675 TEST_HEADER include/spdk/nvme_zns.h 00:16:00.675 TEST_HEADER include/spdk/nvmf_cmd.h 00:16:00.675 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:16:00.675 TEST_HEADER include/spdk/nvmf.h 00:16:00.675 TEST_HEADER include/spdk/nvmf_spec.h 00:16:00.675 TEST_HEADER include/spdk/nvmf_transport.h 00:16:00.675 TEST_HEADER include/spdk/opal.h 00:16:00.675 TEST_HEADER include/spdk/opal_spec.h 00:16:00.675 TEST_HEADER include/spdk/pci_ids.h 00:16:00.675 TEST_HEADER include/spdk/pipe.h 00:16:00.675 TEST_HEADER include/spdk/queue.h 00:16:00.675 TEST_HEADER include/spdk/reduce.h 00:16:00.675 TEST_HEADER include/spdk/rpc.h 00:16:00.675 TEST_HEADER include/spdk/scheduler.h 00:16:00.675 TEST_HEADER include/spdk/scsi.h 00:16:00.675 TEST_HEADER include/spdk/scsi_spec.h 00:16:00.675 TEST_HEADER include/spdk/sock.h 00:16:00.675 TEST_HEADER include/spdk/stdinc.h 00:16:00.675 TEST_HEADER include/spdk/string.h 00:16:00.675 TEST_HEADER include/spdk/thread.h 00:16:00.932 TEST_HEADER include/spdk/trace.h 00:16:00.932 TEST_HEADER include/spdk/trace_parser.h 00:16:00.932 
TEST_HEADER include/spdk/tree.h 00:16:00.932 TEST_HEADER include/spdk/ublk.h 00:16:00.932 TEST_HEADER include/spdk/util.h 00:16:00.932 TEST_HEADER include/spdk/uuid.h 00:16:00.932 TEST_HEADER include/spdk/version.h 00:16:00.932 TEST_HEADER include/spdk/vfio_user_pci.h 00:16:00.932 TEST_HEADER include/spdk/vfio_user_spec.h 00:16:00.932 TEST_HEADER include/spdk/vhost.h 00:16:00.932 CC test/env/mem_callbacks/mem_callbacks.o 00:16:00.932 TEST_HEADER include/spdk/vmd.h 00:16:00.932 TEST_HEADER include/spdk/xor.h 00:16:00.932 TEST_HEADER include/spdk/zipf.h 00:16:00.932 CXX test/cpp_headers/accel.o 00:16:00.932 LINK rpc_client_test 00:16:00.932 LINK poller_perf 00:16:00.932 LINK interrupt_tgt 00:16:00.932 LINK bdev_svc 00:16:00.932 LINK zipf 00:16:00.932 LINK ioat_perf 00:16:01.190 CXX test/cpp_headers/accel_module.o 00:16:01.190 CXX test/cpp_headers/assert.o 00:16:01.190 LINK spdk_trace 00:16:01.190 CXX test/cpp_headers/barrier.o 00:16:01.190 CC test/env/vtophys/vtophys.o 00:16:01.190 CC examples/ioat/verify/verify.o 00:16:01.190 CXX test/cpp_headers/base64.o 00:16:01.448 LINK vtophys 00:16:01.448 LINK test_dma 00:16:01.448 CC examples/thread/thread/thread_ex.o 00:16:01.448 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:16:01.448 CC test/app/histogram_perf/histogram_perf.o 00:16:01.448 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:16:01.448 CC app/trace_record/trace_record.o 00:16:01.448 LINK mem_callbacks 00:16:01.448 CXX test/cpp_headers/bdev.o 00:16:01.448 LINK verify 00:16:01.448 LINK histogram_perf 00:16:01.706 CXX test/cpp_headers/bdev_module.o 00:16:01.706 LINK env_dpdk_post_init 00:16:01.706 CXX test/cpp_headers/bdev_zone.o 00:16:01.706 CC app/nvmf_tgt/nvmf_main.o 00:16:01.706 LINK thread 00:16:01.706 LINK spdk_trace_record 00:16:01.965 CC app/spdk_lspci/spdk_lspci.o 00:16:01.965 LINK nvmf_tgt 00:16:01.965 CC app/iscsi_tgt/iscsi_tgt.o 00:16:01.965 CXX test/cpp_headers/bit_array.o 00:16:01.965 CC app/spdk_tgt/spdk_tgt.o 00:16:01.965 LINK nvme_fuzz 00:16:01.965 CC test/env/memory/memory_ut.o 00:16:01.965 CC app/spdk_nvme_perf/perf.o 00:16:01.965 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:16:01.965 LINK spdk_lspci 00:16:01.965 CXX test/cpp_headers/bit_pool.o 00:16:02.224 CC examples/sock/hello_world/hello_sock.o 00:16:02.224 LINK iscsi_tgt 00:16:02.224 LINK spdk_tgt 00:16:02.224 CXX test/cpp_headers/blob_bdev.o 00:16:02.224 CC examples/vmd/lsvmd/lsvmd.o 00:16:02.224 CC examples/idxd/perf/perf.o 00:16:02.482 CC examples/fsdev/hello_world/hello_fsdev.o 00:16:02.482 LINK hello_sock 00:16:02.482 LINK lsvmd 00:16:02.482 CC examples/vmd/led/led.o 00:16:02.482 CXX test/cpp_headers/blobfs_bdev.o 00:16:02.740 LINK led 00:16:02.740 CC examples/accel/perf/accel_perf.o 00:16:02.740 CXX test/cpp_headers/blobfs.o 00:16:02.740 LINK hello_fsdev 00:16:02.740 LINK idxd_perf 00:16:02.998 CC examples/blob/hello_world/hello_blob.o 00:16:02.998 CC test/event/event_perf/event_perf.o 00:16:02.998 CXX test/cpp_headers/blob.o 00:16:02.998 CC test/event/reactor/reactor.o 00:16:02.998 LINK spdk_nvme_perf 00:16:02.998 LINK event_perf 00:16:02.998 CC test/app/jsoncat/jsoncat.o 00:16:02.998 CC app/spdk_nvme_identify/identify.o 00:16:02.998 CXX test/cpp_headers/conf.o 00:16:03.255 LINK hello_blob 00:16:03.255 LINK reactor 00:16:03.255 LINK jsoncat 00:16:03.255 CXX test/cpp_headers/config.o 00:16:03.255 LINK memory_ut 00:16:03.255 CXX test/cpp_headers/cpuset.o 00:16:03.255 CC test/app/stub/stub.o 00:16:03.512 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:16:03.512 LINK accel_perf 00:16:03.512 CC 
test/event/reactor_perf/reactor_perf.o 00:16:03.512 CXX test/cpp_headers/crc16.o 00:16:03.512 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:16:03.512 CC examples/blob/cli/blobcli.o 00:16:03.512 CC test/event/app_repeat/app_repeat.o 00:16:03.512 LINK stub 00:16:03.770 CC test/env/pci/pci_ut.o 00:16:03.770 LINK reactor_perf 00:16:03.770 CXX test/cpp_headers/crc32.o 00:16:03.770 LINK app_repeat 00:16:03.770 CC test/event/scheduler/scheduler.o 00:16:03.770 CC app/spdk_nvme_discover/discovery_aer.o 00:16:04.028 CXX test/cpp_headers/crc64.o 00:16:04.028 LINK scheduler 00:16:04.029 CXX test/cpp_headers/dif.o 00:16:04.029 LINK spdk_nvme_discover 00:16:04.029 CC test/nvme/aer/aer.o 00:16:04.287 LINK pci_ut 00:16:04.287 LINK iscsi_fuzz 00:16:04.287 LINK blobcli 00:16:04.287 LINK vhost_fuzz 00:16:04.287 CC examples/nvme/hello_world/hello_world.o 00:16:04.287 CXX test/cpp_headers/dma.o 00:16:04.287 LINK spdk_nvme_identify 00:16:04.287 CXX test/cpp_headers/endian.o 00:16:04.546 CXX test/cpp_headers/env_dpdk.o 00:16:04.546 LINK aer 00:16:04.546 LINK hello_world 00:16:04.546 CC test/accel/dif/dif.o 00:16:04.546 CC test/nvme/reset/reset.o 00:16:04.546 CC test/nvme/sgl/sgl.o 00:16:04.546 CC app/spdk_top/spdk_top.o 00:16:04.546 CC test/nvme/e2edp/nvme_dp.o 00:16:04.546 CC test/blobfs/mkfs/mkfs.o 00:16:04.805 CXX test/cpp_headers/env.o 00:16:04.805 CC test/lvol/esnap/esnap.o 00:16:04.805 CC examples/nvme/reconnect/reconnect.o 00:16:04.805 LINK mkfs 00:16:04.805 CXX test/cpp_headers/event.o 00:16:05.063 LINK reset 00:16:05.063 LINK sgl 00:16:05.063 CC examples/bdev/hello_world/hello_bdev.o 00:16:05.063 LINK nvme_dp 00:16:05.063 CXX test/cpp_headers/fd_group.o 00:16:05.321 CC test/nvme/overhead/overhead.o 00:16:05.321 CC test/nvme/err_injection/err_injection.o 00:16:05.321 CC test/nvme/startup/startup.o 00:16:05.321 LINK hello_bdev 00:16:05.321 CC test/nvme/reserve/reserve.o 00:16:05.321 CXX test/cpp_headers/fd.o 00:16:05.321 LINK reconnect 00:16:05.578 LINK startup 00:16:05.578 LINK err_injection 00:16:05.578 CXX test/cpp_headers/file.o 00:16:05.578 LINK dif 00:16:05.578 LINK reserve 00:16:05.578 LINK overhead 00:16:05.836 CC examples/nvme/nvme_manage/nvme_manage.o 00:16:05.836 CC examples/bdev/bdevperf/bdevperf.o 00:16:05.836 CXX test/cpp_headers/fsdev.o 00:16:05.836 CC test/nvme/simple_copy/simple_copy.o 00:16:05.836 LINK spdk_top 00:16:05.836 CC test/nvme/connect_stress/connect_stress.o 00:16:05.836 CC test/nvme/compliance/nvme_compliance.o 00:16:05.836 CC test/nvme/boot_partition/boot_partition.o 00:16:05.836 CXX test/cpp_headers/fsdev_module.o 00:16:06.094 LINK simple_copy 00:16:06.094 LINK connect_stress 00:16:06.094 LINK boot_partition 00:16:06.094 CXX test/cpp_headers/ftl.o 00:16:06.094 CC app/vhost/vhost.o 00:16:06.094 CC test/bdev/bdevio/bdevio.o 00:16:06.351 LINK nvme_compliance 00:16:06.351 LINK nvme_manage 00:16:06.351 LINK vhost 00:16:06.351 CXX test/cpp_headers/fuse_dispatcher.o 00:16:06.351 CC test/nvme/fused_ordering/fused_ordering.o 00:16:06.351 CC test/nvme/fdp/fdp.o 00:16:06.351 CC test/nvme/doorbell_aers/doorbell_aers.o 00:16:06.609 CXX test/cpp_headers/gpt_spec.o 00:16:06.609 CC test/nvme/cuse/cuse.o 00:16:06.609 LINK fused_ordering 00:16:06.609 LINK doorbell_aers 00:16:06.609 LINK bdevio 00:16:06.609 CC examples/nvme/arbitration/arbitration.o 00:16:06.867 LINK bdevperf 00:16:06.867 CC app/spdk_dd/spdk_dd.o 00:16:06.867 CXX test/cpp_headers/hexlify.o 00:16:06.867 LINK fdp 00:16:06.867 CXX test/cpp_headers/histogram_data.o 00:16:06.867 CXX test/cpp_headers/idxd.o 00:16:06.867 CC 
examples/nvme/hotplug/hotplug.o 00:16:07.125 CXX test/cpp_headers/idxd_spec.o 00:16:07.125 LINK arbitration 00:16:07.125 CC examples/nvme/cmb_copy/cmb_copy.o 00:16:07.125 CC examples/nvme/abort/abort.o 00:16:07.125 LINK spdk_dd 00:16:07.125 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:16:07.125 CC app/fio/nvme/fio_plugin.o 00:16:07.125 CXX test/cpp_headers/init.o 00:16:07.382 LINK hotplug 00:16:07.382 CXX test/cpp_headers/ioat.o 00:16:07.382 LINK pmr_persistence 00:16:07.382 LINK cmb_copy 00:16:07.382 CXX test/cpp_headers/ioat_spec.o 00:16:07.640 CXX test/cpp_headers/iscsi_spec.o 00:16:07.640 CC app/fio/bdev/fio_plugin.o 00:16:07.640 CXX test/cpp_headers/json.o 00:16:07.640 CXX test/cpp_headers/jsonrpc.o 00:16:07.640 LINK abort 00:16:07.640 CXX test/cpp_headers/keyring.o 00:16:07.640 CXX test/cpp_headers/keyring_module.o 00:16:07.640 CXX test/cpp_headers/likely.o 00:16:07.899 CXX test/cpp_headers/log.o 00:16:07.899 CXX test/cpp_headers/lvol.o 00:16:07.899 CXX test/cpp_headers/md5.o 00:16:07.899 CXX test/cpp_headers/memory.o 00:16:07.899 CXX test/cpp_headers/mmio.o 00:16:07.899 LINK spdk_nvme 00:16:07.899 CXX test/cpp_headers/nbd.o 00:16:08.159 CXX test/cpp_headers/net.o 00:16:08.159 CXX test/cpp_headers/notify.o 00:16:08.159 CXX test/cpp_headers/nvme.o 00:16:08.159 CXX test/cpp_headers/nvme_intel.o 00:16:08.159 CC examples/nvmf/nvmf/nvmf.o 00:16:08.159 CXX test/cpp_headers/nvme_ocssd.o 00:16:08.159 CXX test/cpp_headers/nvme_ocssd_spec.o 00:16:08.159 LINK cuse 00:16:08.159 CXX test/cpp_headers/nvme_spec.o 00:16:08.159 LINK spdk_bdev 00:16:08.159 CXX test/cpp_headers/nvme_zns.o 00:16:08.427 CXX test/cpp_headers/nvmf_cmd.o 00:16:08.427 CXX test/cpp_headers/nvmf_fc_spec.o 00:16:08.427 CXX test/cpp_headers/nvmf.o 00:16:08.427 CXX test/cpp_headers/nvmf_spec.o 00:16:08.427 CXX test/cpp_headers/nvmf_transport.o 00:16:08.427 CXX test/cpp_headers/opal.o 00:16:08.427 CXX test/cpp_headers/opal_spec.o 00:16:08.427 LINK nvmf 00:16:08.427 CXX test/cpp_headers/pci_ids.o 00:16:08.427 CXX test/cpp_headers/pipe.o 00:16:08.685 CXX test/cpp_headers/queue.o 00:16:08.685 CXX test/cpp_headers/reduce.o 00:16:08.685 CXX test/cpp_headers/rpc.o 00:16:08.685 CXX test/cpp_headers/scheduler.o 00:16:08.685 CXX test/cpp_headers/scsi.o 00:16:08.685 CXX test/cpp_headers/scsi_spec.o 00:16:08.685 CXX test/cpp_headers/sock.o 00:16:08.685 CXX test/cpp_headers/stdinc.o 00:16:08.685 CXX test/cpp_headers/string.o 00:16:08.685 CXX test/cpp_headers/thread.o 00:16:08.685 CXX test/cpp_headers/trace.o 00:16:08.685 CXX test/cpp_headers/trace_parser.o 00:16:08.944 CXX test/cpp_headers/tree.o 00:16:08.944 CXX test/cpp_headers/ublk.o 00:16:08.944 CXX test/cpp_headers/util.o 00:16:08.944 CXX test/cpp_headers/uuid.o 00:16:08.944 CXX test/cpp_headers/version.o 00:16:08.944 CXX test/cpp_headers/vfio_user_pci.o 00:16:08.944 CXX test/cpp_headers/vfio_user_spec.o 00:16:08.944 CXX test/cpp_headers/vhost.o 00:16:08.944 CXX test/cpp_headers/vmd.o 00:16:08.944 CXX test/cpp_headers/xor.o 00:16:08.944 CXX test/cpp_headers/zipf.o 00:16:12.224 LINK esnap 00:16:12.224 ************************************ 00:16:12.224 END TEST make 00:16:12.224 ************************************ 00:16:12.224 00:16:12.224 real 1m44.215s 00:16:12.224 user 9m42.411s 00:16:12.224 sys 2m12.119s 00:16:12.224 18:43:40 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:16:12.224 18:43:40 make -- common/autotest_common.sh@10 -- $ set +x 00:16:12.224 18:43:40 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:16:12.224 18:43:40 -- pm/common@29 -- $ 
signal_monitor_resources TERM 00:16:12.224 18:43:40 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:16:12.224 18:43:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:16:12.224 18:43:40 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:16:12.224 18:43:40 -- pm/common@44 -- $ pid=5335 00:16:12.224 18:43:40 -- pm/common@50 -- $ kill -TERM 5335 00:16:12.224 18:43:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:16:12.224 18:43:40 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:16:12.224 18:43:40 -- pm/common@44 -- $ pid=5337 00:16:12.224 18:43:40 -- pm/common@50 -- $ kill -TERM 5337 00:16:12.482 18:43:40 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:12.482 18:43:40 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:12.482 18:43:40 -- common/autotest_common.sh@1681 -- # lcov --version 00:16:12.482 18:43:41 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:12.482 18:43:41 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:12.482 18:43:41 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:12.482 18:43:41 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:12.482 18:43:41 -- scripts/common.sh@336 -- # IFS=.-: 00:16:12.482 18:43:41 -- scripts/common.sh@336 -- # read -ra ver1 00:16:12.482 18:43:41 -- scripts/common.sh@337 -- # IFS=.-: 00:16:12.482 18:43:41 -- scripts/common.sh@337 -- # read -ra ver2 00:16:12.482 18:43:41 -- scripts/common.sh@338 -- # local 'op=<' 00:16:12.482 18:43:41 -- scripts/common.sh@340 -- # ver1_l=2 00:16:12.482 18:43:41 -- scripts/common.sh@341 -- # ver2_l=1 00:16:12.482 18:43:41 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:12.482 18:43:41 -- scripts/common.sh@344 -- # case "$op" in 00:16:12.482 18:43:41 -- scripts/common.sh@345 -- # : 1 00:16:12.482 18:43:41 -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:12.482 18:43:41 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:12.482 18:43:41 -- scripts/common.sh@365 -- # decimal 1 00:16:12.482 18:43:41 -- scripts/common.sh@353 -- # local d=1 00:16:12.482 18:43:41 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:12.482 18:43:41 -- scripts/common.sh@355 -- # echo 1 00:16:12.482 18:43:41 -- scripts/common.sh@365 -- # ver1[v]=1 00:16:12.482 18:43:41 -- scripts/common.sh@366 -- # decimal 2 00:16:12.482 18:43:41 -- scripts/common.sh@353 -- # local d=2 00:16:12.482 18:43:41 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:12.482 18:43:41 -- scripts/common.sh@355 -- # echo 2 00:16:12.482 18:43:41 -- scripts/common.sh@366 -- # ver2[v]=2 00:16:12.482 18:43:41 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:12.482 18:43:41 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:12.482 18:43:41 -- scripts/common.sh@368 -- # return 0 00:16:12.482 18:43:41 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:12.482 18:43:41 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:12.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.482 --rc genhtml_branch_coverage=1 00:16:12.482 --rc genhtml_function_coverage=1 00:16:12.482 --rc genhtml_legend=1 00:16:12.482 --rc geninfo_all_blocks=1 00:16:12.482 --rc geninfo_unexecuted_blocks=1 00:16:12.482 00:16:12.482 ' 00:16:12.482 18:43:41 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:12.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.482 --rc genhtml_branch_coverage=1 00:16:12.482 --rc genhtml_function_coverage=1 00:16:12.482 --rc genhtml_legend=1 00:16:12.482 --rc geninfo_all_blocks=1 00:16:12.482 --rc geninfo_unexecuted_blocks=1 00:16:12.483 00:16:12.483 ' 00:16:12.483 18:43:41 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:12.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.483 --rc genhtml_branch_coverage=1 00:16:12.483 --rc genhtml_function_coverage=1 00:16:12.483 --rc genhtml_legend=1 00:16:12.483 --rc geninfo_all_blocks=1 00:16:12.483 --rc geninfo_unexecuted_blocks=1 00:16:12.483 00:16:12.483 ' 00:16:12.483 18:43:41 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:12.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.483 --rc genhtml_branch_coverage=1 00:16:12.483 --rc genhtml_function_coverage=1 00:16:12.483 --rc genhtml_legend=1 00:16:12.483 --rc geninfo_all_blocks=1 00:16:12.483 --rc geninfo_unexecuted_blocks=1 00:16:12.483 00:16:12.483 ' 00:16:12.483 18:43:41 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:12.483 18:43:41 -- nvmf/common.sh@7 -- # uname -s 00:16:12.483 18:43:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:12.483 18:43:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:12.483 18:43:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:12.483 18:43:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:12.483 18:43:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:12.483 18:43:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:12.483 18:43:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:12.483 18:43:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:12.483 18:43:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:12.483 18:43:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:12.483 18:43:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2b30ccf6-f6d8-4ff4-85d2-d61da9ea3b67 00:16:12.483 
18:43:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=2b30ccf6-f6d8-4ff4-85d2-d61da9ea3b67 00:16:12.483 18:43:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:12.483 18:43:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:12.483 18:43:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:16:12.483 18:43:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:12.483 18:43:41 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:12.483 18:43:41 -- scripts/common.sh@15 -- # shopt -s extglob 00:16:12.483 18:43:41 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:12.483 18:43:41 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:12.483 18:43:41 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:12.483 18:43:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.483 18:43:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.483 18:43:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.483 18:43:41 -- paths/export.sh@5 -- # export PATH 00:16:12.483 18:43:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.483 18:43:41 -- nvmf/common.sh@51 -- # : 0 00:16:12.483 18:43:41 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:12.483 18:43:41 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:12.483 18:43:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:12.483 18:43:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:12.483 18:43:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:12.483 18:43:41 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:12.483 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:12.483 18:43:41 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:12.483 18:43:41 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:12.483 18:43:41 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:12.483 18:43:41 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:16:12.483 18:43:41 -- spdk/autotest.sh@32 -- # uname -s 00:16:12.483 18:43:41 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:16:12.483 18:43:41 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:16:12.483 18:43:41 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:16:12.483 18:43:41 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:16:12.483 18:43:41 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:16:12.483 18:43:41 -- spdk/autotest.sh@44 -- # modprobe nbd 00:16:12.483 18:43:41 -- spdk/autotest.sh@46 -- # type -P udevadm 00:16:12.483 18:43:41 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:16:12.483 18:43:41 -- spdk/autotest.sh@48 -- # udevadm_pid=55359 00:16:12.483 18:43:41 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:16:12.483 18:43:41 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:16:12.483 18:43:41 -- pm/common@17 -- # local monitor 00:16:12.483 18:43:41 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:16:12.483 18:43:41 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:16:12.483 18:43:41 -- pm/common@25 -- # sleep 1 00:16:12.483 18:43:41 -- pm/common@21 -- # date +%s 00:16:12.483 18:43:41 -- pm/common@21 -- # date +%s 00:16:12.483 18:43:41 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728413021 00:16:12.483 18:43:41 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728413021 00:16:12.741 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728413021_collect-cpu-load.pm.log 00:16:12.741 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728413021_collect-vmstat.pm.log 00:16:13.674 18:43:42 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:16:13.674 18:43:42 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:16:13.674 18:43:42 -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:13.674 18:43:42 -- common/autotest_common.sh@10 -- # set +x 00:16:13.674 18:43:42 -- spdk/autotest.sh@59 -- # create_test_list 00:16:13.674 18:43:42 -- common/autotest_common.sh@748 -- # xtrace_disable 00:16:13.674 18:43:42 -- common/autotest_common.sh@10 -- # set +x 00:16:13.674 18:43:42 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:16:13.674 18:43:42 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:16:13.674 18:43:42 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:16:13.674 18:43:42 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:16:13.674 18:43:42 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:16:13.674 18:43:42 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:16:13.674 18:43:42 -- common/autotest_common.sh@1455 -- # uname 00:16:13.674 18:43:42 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:16:13.674 18:43:42 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:16:13.674 18:43:42 -- common/autotest_common.sh@1475 -- # uname 00:16:13.674 18:43:42 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:16:13.674 18:43:42 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:16:13.674 18:43:42 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:16:13.674 lcov: LCOV version 1.15 00:16:13.674 18:43:42 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:16:31.750 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:16:31.750 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:16:49.972 18:44:18 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:16:49.972 18:44:18 -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:49.972 18:44:18 -- common/autotest_common.sh@10 -- # set +x 00:16:49.972 18:44:18 -- spdk/autotest.sh@78 -- # rm -f 00:16:49.972 18:44:18 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:50.538 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:51.101 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:16:51.101 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:16:51.101 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:16:51.101 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:16:51.101 18:44:19 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:16:51.101 18:44:19 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:16:51.101 18:44:19 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:16:51.101 18:44:19 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:16:51.101 18:44:19 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:16:51.101 18:44:19 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:16:51.101 18:44:19 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:16:51.101 18:44:19 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:51.101 18:44:19 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:51.101 18:44:19 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:16:51.101 18:44:19 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:16:51.101 18:44:19 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:16:51.101 18:44:19 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:51.101 18:44:19 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:51.101 18:44:19 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:16:51.101 18:44:19 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:16:51.101 18:44:19 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:16:51.101 18:44:19 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:16:51.101 18:44:19 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:51.101 18:44:19 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:16:51.101 18:44:19 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:16:51.101 18:44:19 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:16:51.101 18:44:19 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:16:51.101 18:44:19 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:51.101 18:44:19 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:16:51.101 18:44:19 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:16:51.101 18:44:19 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:16:51.101 18:44:19 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:16:51.101 18:44:19 
-- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:51.101 18:44:19 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:16:51.101 18:44:19 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:16:51.101 18:44:19 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:16:51.101 18:44:19 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:16:51.101 18:44:19 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:51.101 18:44:19 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:16:51.101 18:44:19 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:16:51.101 18:44:19 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:16:51.101 18:44:19 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:16:51.101 18:44:19 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:51.101 18:44:19 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:16:51.359 18:44:19 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:16:51.359 18:44:19 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:16:51.359 18:44:19 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:16:51.359 18:44:19 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:16:51.359 18:44:19 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:16:51.359 No valid GPT data, bailing 00:16:51.359 18:44:19 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:51.359 18:44:19 -- scripts/common.sh@394 -- # pt= 00:16:51.359 18:44:19 -- scripts/common.sh@395 -- # return 1 00:16:51.359 18:44:19 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:16:51.359 1+0 records in 00:16:51.359 1+0 records out 00:16:51.359 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0125214 s, 83.7 MB/s 00:16:51.359 18:44:19 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:16:51.359 18:44:19 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:16:51.359 18:44:19 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:16:51.359 18:44:19 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:16:51.359 18:44:19 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:16:51.359 No valid GPT data, bailing 00:16:51.359 18:44:20 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:51.359 18:44:20 -- scripts/common.sh@394 -- # pt= 00:16:51.359 18:44:20 -- scripts/common.sh@395 -- # return 1 00:16:51.359 18:44:20 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:16:51.359 1+0 records in 00:16:51.359 1+0 records out 00:16:51.359 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00369932 s, 283 MB/s 00:16:51.359 18:44:20 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:16:51.359 18:44:20 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:16:51.359 18:44:20 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:16:51.359 18:44:20 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:16:51.359 18:44:20 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:16:51.359 No valid GPT data, bailing 00:16:51.616 18:44:20 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:16:51.616 18:44:20 -- scripts/common.sh@394 -- # pt= 00:16:51.616 18:44:20 -- scripts/common.sh@395 -- # return 1 00:16:51.616 18:44:20 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:16:51.616 1+0 
records in 00:16:51.616 1+0 records out 00:16:51.616 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00491934 s, 213 MB/s 00:16:51.616 18:44:20 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:16:51.616 18:44:20 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:16:51.616 18:44:20 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:16:51.616 18:44:20 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:16:51.616 18:44:20 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:16:51.616 No valid GPT data, bailing 00:16:51.616 18:44:20 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:16:51.616 18:44:20 -- scripts/common.sh@394 -- # pt= 00:16:51.616 18:44:20 -- scripts/common.sh@395 -- # return 1 00:16:51.616 18:44:20 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:16:51.616 1+0 records in 00:16:51.616 1+0 records out 00:16:51.616 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00365901 s, 287 MB/s 00:16:51.616 18:44:20 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:16:51.616 18:44:20 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:16:51.616 18:44:20 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:16:51.616 18:44:20 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:16:51.616 18:44:20 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:16:51.616 No valid GPT data, bailing 00:16:51.616 18:44:20 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:16:51.616 18:44:20 -- scripts/common.sh@394 -- # pt= 00:16:51.616 18:44:20 -- scripts/common.sh@395 -- # return 1 00:16:51.616 18:44:20 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:16:51.616 1+0 records in 00:16:51.616 1+0 records out 00:16:51.616 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00424826 s, 247 MB/s 00:16:51.616 18:44:20 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:16:51.616 18:44:20 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:16:51.616 18:44:20 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:16:51.616 18:44:20 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:16:51.616 18:44:20 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:16:51.874 No valid GPT data, bailing 00:16:51.874 18:44:20 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:16:51.874 18:44:20 -- scripts/common.sh@394 -- # pt= 00:16:51.874 18:44:20 -- scripts/common.sh@395 -- # return 1 00:16:51.874 18:44:20 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:16:51.874 1+0 records in 00:16:51.874 1+0 records out 00:16:51.874 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00351249 s, 299 MB/s 00:16:51.874 18:44:20 -- spdk/autotest.sh@105 -- # sync 00:16:51.874 18:44:20 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:16:51.874 18:44:20 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:16:51.874 18:44:20 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:16:54.402 18:44:22 -- spdk/autotest.sh@111 -- # uname -s 00:16:54.402 18:44:22 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:16:54.402 18:44:22 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:16:54.402 18:44:22 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:16:54.402 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:54.968 
Hugepages 00:16:54.968 node hugesize free / total 00:16:54.968 node0 1048576kB 0 / 0 00:16:54.968 node0 2048kB 0 / 0 00:16:54.968 00:16:54.968 Type BDF Vendor Device NUMA Driver Device Block devices 00:16:54.968 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:16:55.225 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:16:55.225 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:16:55.225 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:16:55.225 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:16:55.225 18:44:23 -- spdk/autotest.sh@117 -- # uname -s 00:16:55.225 18:44:23 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:16:55.225 18:44:23 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:16:55.225 18:44:23 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:55.790 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:56.358 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:56.358 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:56.358 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:56.615 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:56.615 18:44:25 -- common/autotest_common.sh@1515 -- # sleep 1 00:16:57.553 18:44:26 -- common/autotest_common.sh@1516 -- # bdfs=() 00:16:57.553 18:44:26 -- common/autotest_common.sh@1516 -- # local bdfs 00:16:57.553 18:44:26 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:16:57.553 18:44:26 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:16:57.553 18:44:26 -- common/autotest_common.sh@1496 -- # bdfs=() 00:16:57.553 18:44:26 -- common/autotest_common.sh@1496 -- # local bdfs 00:16:57.553 18:44:26 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:16:57.553 18:44:26 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:57.553 18:44:26 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:16:57.810 18:44:26 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:16:57.810 18:44:26 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:16:57.810 18:44:26 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:58.068 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:58.068 Waiting for block devices as requested 00:16:58.335 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:58.335 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:58.335 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:16:58.594 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:17:03.857 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:17:03.857 18:44:32 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:17:03.857 18:44:32 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:17:03.857 18:44:32 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:17:03.857 18:44:32 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:17:03.857 18:44:32 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:17:03.857 18:44:32 -- common/autotest_common.sh@1486 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:17:03.857 18:44:32 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:17:03.857 18:44:32 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:17:03.857 18:44:32 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:17:03.857 18:44:32 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:17:03.857 18:44:32 -- common/autotest_common.sh@1529 -- # grep oacs 00:17:03.857 18:44:32 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:17:03.857 18:44:32 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:17:03.857 18:44:32 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:17:03.857 18:44:32 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:17:03.857 18:44:32 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:17:03.857 18:44:32 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:17:03.857 18:44:32 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:17:03.857 18:44:32 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:17:03.857 18:44:32 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:17:03.857 18:44:32 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:17:03.857 18:44:32 -- common/autotest_common.sh@1541 -- # continue 00:17:03.857 18:44:32 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:17:03.857 18:44:32 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:17:03.857 18:44:32 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:17:03.857 18:44:32 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:17:03.857 18:44:32 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:17:03.857 18:44:32 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:17:03.857 18:44:32 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:17:03.857 18:44:32 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:17:03.857 18:44:32 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:17:03.857 18:44:32 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:17:03.857 18:44:32 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:17:03.857 18:44:32 -- common/autotest_common.sh@1529 -- # grep oacs 00:17:03.857 18:44:32 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:17:03.857 18:44:32 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:17:03.857 18:44:32 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:17:03.857 18:44:32 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:17:03.857 18:44:32 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:17:03.857 18:44:32 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:17:03.857 18:44:32 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:17:03.857 18:44:32 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:17:03.857 18:44:32 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:17:03.857 18:44:32 -- common/autotest_common.sh@1541 -- # continue 00:17:03.857 18:44:32 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:17:03.857 18:44:32 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:17:03.857 18:44:32 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:17:03.857 18:44:32 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 00:17:03.857 18:44:32 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:17:03.857 18:44:32 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:17:03.857 18:44:32 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:17:03.857 18:44:32 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:17:03.857 18:44:32 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:17:03.857 18:44:32 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:17:03.857 18:44:32 -- common/autotest_common.sh@1529 -- # grep oacs 00:17:03.857 18:44:32 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:17:03.857 18:44:32 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:17:03.857 18:44:32 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:17:03.857 18:44:32 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:17:03.857 18:44:32 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:17:03.857 18:44:32 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:17:03.857 18:44:32 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:17:03.857 18:44:32 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:17:03.857 18:44:32 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:17:03.857 18:44:32 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:17:03.857 18:44:32 -- common/autotest_common.sh@1541 -- # continue 00:17:03.857 18:44:32 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:17:03.857 18:44:32 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:17:03.857 18:44:32 -- common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:17:03.857 18:44:32 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:17:03.857 18:44:32 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:17:03.857 18:44:32 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:17:03.857 18:44:32 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:17:03.857 18:44:32 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:17:03.857 18:44:32 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:17:03.857 18:44:32 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:17:03.857 18:44:32 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:17:03.857 18:44:32 -- common/autotest_common.sh@1529 -- # grep oacs 00:17:03.857 18:44:32 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:17:03.857 18:44:32 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:17:03.857 18:44:32 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:17:03.857 18:44:32 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:17:03.858 18:44:32 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:17:03.858 18:44:32 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:17:03.858 18:44:32 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:17:03.858 18:44:32 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:17:03.858 18:44:32 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 
00:17:03.858 18:44:32 -- common/autotest_common.sh@1541 -- # continue 00:17:03.858 18:44:32 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:17:03.858 18:44:32 -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:03.858 18:44:32 -- common/autotest_common.sh@10 -- # set +x 00:17:03.858 18:44:32 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:17:03.858 18:44:32 -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:03.858 18:44:32 -- common/autotest_common.sh@10 -- # set +x 00:17:03.858 18:44:32 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:04.423 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:04.988 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:04.988 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:04.988 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:17:04.988 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:17:04.988 18:44:33 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:17:04.988 18:44:33 -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:04.988 18:44:33 -- common/autotest_common.sh@10 -- # set +x 00:17:04.988 18:44:33 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:17:04.988 18:44:33 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:17:04.988 18:44:33 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:17:04.988 18:44:33 -- common/autotest_common.sh@1561 -- # bdfs=() 00:17:04.988 18:44:33 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:17:04.988 18:44:33 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:17:04.988 18:44:33 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:17:04.988 18:44:33 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:17:04.988 18:44:33 -- common/autotest_common.sh@1496 -- # bdfs=() 00:17:04.988 18:44:33 -- common/autotest_common.sh@1496 -- # local bdfs 00:17:04.988 18:44:33 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:17:04.988 18:44:33 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:17:04.988 18:44:33 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:05.245 18:44:33 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:17:05.245 18:44:33 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:17:05.245 18:44:33 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:17:05.245 18:44:33 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:17:05.245 18:44:33 -- common/autotest_common.sh@1564 -- # device=0x0010 00:17:05.245 18:44:33 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:17:05.245 18:44:33 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:17:05.245 18:44:33 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:17:05.245 18:44:33 -- common/autotest_common.sh@1564 -- # device=0x0010 00:17:05.245 18:44:33 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:17:05.245 18:44:33 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:17:05.245 18:44:33 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:17:05.245 18:44:33 -- common/autotest_common.sh@1564 -- # device=0x0010 00:17:05.245 18:44:33 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
00:17:05.245 18:44:33 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:17:05.245 18:44:33 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:17:05.245 18:44:33 -- common/autotest_common.sh@1564 -- # device=0x0010 00:17:05.245 18:44:33 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:17:05.245 18:44:33 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:17:05.245 18:44:33 -- common/autotest_common.sh@1570 -- # return 0 00:17:05.245 18:44:33 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:17:05.245 18:44:33 -- common/autotest_common.sh@1578 -- # return 0 00:17:05.245 18:44:33 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:17:05.245 18:44:33 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:17:05.245 18:44:33 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:17:05.245 18:44:33 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:17:05.245 18:44:33 -- spdk/autotest.sh@149 -- # timing_enter lib 00:17:05.245 18:44:33 -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:05.245 18:44:33 -- common/autotest_common.sh@10 -- # set +x 00:17:05.245 18:44:33 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:17:05.245 18:44:33 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:17:05.245 18:44:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:05.245 18:44:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:05.245 18:44:33 -- common/autotest_common.sh@10 -- # set +x 00:17:05.245 ************************************ 00:17:05.245 START TEST env 00:17:05.245 ************************************ 00:17:05.245 18:44:33 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:17:05.245 * Looking for test storage... 00:17:05.245 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:17:05.245 18:44:33 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:05.245 18:44:33 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:05.245 18:44:33 env -- common/autotest_common.sh@1681 -- # lcov --version 00:17:05.503 18:44:34 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:05.503 18:44:34 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:05.503 18:44:34 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:05.503 18:44:34 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:05.503 18:44:34 env -- scripts/common.sh@336 -- # IFS=.-: 00:17:05.503 18:44:34 env -- scripts/common.sh@336 -- # read -ra ver1 00:17:05.503 18:44:34 env -- scripts/common.sh@337 -- # IFS=.-: 00:17:05.503 18:44:34 env -- scripts/common.sh@337 -- # read -ra ver2 00:17:05.503 18:44:34 env -- scripts/common.sh@338 -- # local 'op=<' 00:17:05.503 18:44:34 env -- scripts/common.sh@340 -- # ver1_l=2 00:17:05.503 18:44:34 env -- scripts/common.sh@341 -- # ver2_l=1 00:17:05.503 18:44:34 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:05.503 18:44:34 env -- scripts/common.sh@344 -- # case "$op" in 00:17:05.503 18:44:34 env -- scripts/common.sh@345 -- # : 1 00:17:05.503 18:44:34 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:05.503 18:44:34 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:05.503 18:44:34 env -- scripts/common.sh@365 -- # decimal 1 00:17:05.503 18:44:34 env -- scripts/common.sh@353 -- # local d=1 00:17:05.503 18:44:34 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:05.503 18:44:34 env -- scripts/common.sh@355 -- # echo 1 00:17:05.503 18:44:34 env -- scripts/common.sh@365 -- # ver1[v]=1 00:17:05.503 18:44:34 env -- scripts/common.sh@366 -- # decimal 2 00:17:05.503 18:44:34 env -- scripts/common.sh@353 -- # local d=2 00:17:05.503 18:44:34 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:05.503 18:44:34 env -- scripts/common.sh@355 -- # echo 2 00:17:05.503 18:44:34 env -- scripts/common.sh@366 -- # ver2[v]=2 00:17:05.503 18:44:34 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:05.503 18:44:34 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:05.503 18:44:34 env -- scripts/common.sh@368 -- # return 0 00:17:05.503 18:44:34 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:05.503 18:44:34 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:05.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.503 --rc genhtml_branch_coverage=1 00:17:05.503 --rc genhtml_function_coverage=1 00:17:05.503 --rc genhtml_legend=1 00:17:05.503 --rc geninfo_all_blocks=1 00:17:05.503 --rc geninfo_unexecuted_blocks=1 00:17:05.503 00:17:05.503 ' 00:17:05.503 18:44:34 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:05.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.503 --rc genhtml_branch_coverage=1 00:17:05.503 --rc genhtml_function_coverage=1 00:17:05.503 --rc genhtml_legend=1 00:17:05.503 --rc geninfo_all_blocks=1 00:17:05.503 --rc geninfo_unexecuted_blocks=1 00:17:05.503 00:17:05.503 ' 00:17:05.503 18:44:34 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:05.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.503 --rc genhtml_branch_coverage=1 00:17:05.503 --rc genhtml_function_coverage=1 00:17:05.503 --rc genhtml_legend=1 00:17:05.503 --rc geninfo_all_blocks=1 00:17:05.503 --rc geninfo_unexecuted_blocks=1 00:17:05.503 00:17:05.503 ' 00:17:05.503 18:44:34 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:05.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.503 --rc genhtml_branch_coverage=1 00:17:05.503 --rc genhtml_function_coverage=1 00:17:05.503 --rc genhtml_legend=1 00:17:05.503 --rc geninfo_all_blocks=1 00:17:05.503 --rc geninfo_unexecuted_blocks=1 00:17:05.503 00:17:05.503 ' 00:17:05.503 18:44:34 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:17:05.503 18:44:34 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:05.503 18:44:34 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:05.503 18:44:34 env -- common/autotest_common.sh@10 -- # set +x 00:17:05.503 ************************************ 00:17:05.503 START TEST env_memory 00:17:05.503 ************************************ 00:17:05.503 18:44:34 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:17:05.503 00:17:05.503 00:17:05.503 CUnit - A unit testing framework for C - Version 2.1-3 00:17:05.503 http://cunit.sourceforge.net/ 00:17:05.503 00:17:05.503 00:17:05.503 Suite: memory 00:17:05.503 Test: alloc and free memory map ...[2024-10-08 18:44:34.102235] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:17:05.503 passed 00:17:05.503 Test: mem map translation ...[2024-10-08 18:44:34.156319] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:17:05.503 [2024-10-08 18:44:34.156408] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:17:05.503 [2024-10-08 18:44:34.156489] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:17:05.503 [2024-10-08 18:44:34.156520] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:17:05.503 passed 00:17:05.503 Test: mem map registration ...[2024-10-08 18:44:34.242186] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:17:05.503 [2024-10-08 18:44:34.242299] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:17:05.761 passed 00:17:05.761 Test: mem map adjacent registrations ...passed 00:17:05.761 00:17:05.761 Run Summary: Type Total Ran Passed Failed Inactive 00:17:05.761 suites 1 1 n/a 0 0 00:17:05.761 tests 4 4 4 0 0 00:17:05.761 asserts 152 152 152 0 n/a 00:17:05.761 00:17:05.761 Elapsed time = 0.293 seconds 00:17:05.761 00:17:05.761 real 0m0.327s 00:17:05.761 user 0m0.299s 00:17:05.761 sys 0m0.022s 00:17:05.761 18:44:34 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:05.761 18:44:34 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:17:05.761 ************************************ 00:17:05.761 END TEST env_memory 00:17:05.761 ************************************ 00:17:05.761 18:44:34 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:17:05.761 18:44:34 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:05.761 18:44:34 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:05.761 18:44:34 env -- common/autotest_common.sh@10 -- # set +x 00:17:05.761 ************************************ 00:17:05.761 START TEST env_vtophys 00:17:05.761 ************************************ 00:17:05.761 18:44:34 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:17:05.761 EAL: lib.eal log level changed from notice to debug 00:17:05.761 EAL: Detected lcore 0 as core 0 on socket 0 00:17:05.761 EAL: Detected lcore 1 as core 0 on socket 0 00:17:05.761 EAL: Detected lcore 2 as core 0 on socket 0 00:17:05.761 EAL: Detected lcore 3 as core 0 on socket 0 00:17:05.761 EAL: Detected lcore 4 as core 0 on socket 0 00:17:05.761 EAL: Detected lcore 5 as core 0 on socket 0 00:17:05.761 EAL: Detected lcore 6 as core 0 on socket 0 00:17:05.761 EAL: Detected lcore 7 as core 0 on socket 0 00:17:05.761 EAL: Detected lcore 8 as core 0 on socket 0 00:17:05.761 EAL: Detected lcore 9 as core 0 on socket 0 00:17:05.761 EAL: Maximum logical cores by configuration: 128 00:17:05.761 EAL: Detected CPU lcores: 10 00:17:05.761 EAL: Detected NUMA nodes: 1 00:17:05.761 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:17:05.761 EAL: Detected shared linkage of DPDK 00:17:05.761 EAL: No 
shared files mode enabled, IPC will be disabled 00:17:05.761 EAL: Selected IOVA mode 'PA' 00:17:05.761 EAL: Probing VFIO support... 00:17:05.761 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:17:05.761 EAL: VFIO modules not loaded, skipping VFIO support... 00:17:05.761 EAL: Ask a virtual area of 0x2e000 bytes 00:17:05.761 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:17:05.761 EAL: Setting up physically contiguous memory... 00:17:05.761 EAL: Setting maximum number of open files to 524288 00:17:05.761 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:17:05.761 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:17:05.761 EAL: Ask a virtual area of 0x61000 bytes 00:17:05.761 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:17:05.761 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:17:05.761 EAL: Ask a virtual area of 0x400000000 bytes 00:17:05.761 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:17:05.761 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:17:05.761 EAL: Ask a virtual area of 0x61000 bytes 00:17:05.761 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:17:05.761 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:17:05.761 EAL: Ask a virtual area of 0x400000000 bytes 00:17:05.761 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:17:05.761 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:17:05.761 EAL: Ask a virtual area of 0x61000 bytes 00:17:05.761 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:17:05.761 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:17:05.761 EAL: Ask a virtual area of 0x400000000 bytes 00:17:05.761 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:17:05.761 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:17:05.761 EAL: Ask a virtual area of 0x61000 bytes 00:17:05.761 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:17:05.761 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:17:05.761 EAL: Ask a virtual area of 0x400000000 bytes 00:17:05.761 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:17:05.761 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:17:05.761 EAL: Hugepages will be freed exactly as allocated. 00:17:05.761 EAL: No shared files mode enabled, IPC is disabled 00:17:05.761 EAL: No shared files mode enabled, IPC is disabled 00:17:06.019 EAL: TSC frequency is ~2100000 KHz 00:17:06.019 EAL: Main lcore 0 is ready (tid=7f477a2c3a40;cpuset=[0]) 00:17:06.019 EAL: Trying to obtain current memory policy. 00:17:06.019 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:06.019 EAL: Restoring previous memory policy: 0 00:17:06.019 EAL: request: mp_malloc_sync 00:17:06.019 EAL: No shared files mode enabled, IPC is disabled 00:17:06.019 EAL: Heap on socket 0 was expanded by 2MB 00:17:06.019 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:17:06.019 EAL: No PCI address specified using 'addr=' in: bus=pci 00:17:06.019 EAL: Mem event callback 'spdk:(nil)' registered 00:17:06.019 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:17:06.019 00:17:06.019 00:17:06.019 CUnit - A unit testing framework for C - Version 2.1-3 00:17:06.019 http://cunit.sourceforge.net/ 00:17:06.019 00:17:06.019 00:17:06.019 Suite: components_suite 00:17:06.585 Test: vtophys_malloc_test ...passed 00:17:06.585 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:17:06.585 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:06.585 EAL: Restoring previous memory policy: 4 00:17:06.585 EAL: Calling mem event callback 'spdk:(nil)' 00:17:06.585 EAL: request: mp_malloc_sync 00:17:06.585 EAL: No shared files mode enabled, IPC is disabled 00:17:06.585 EAL: Heap on socket 0 was expanded by 4MB 00:17:06.585 EAL: Calling mem event callback 'spdk:(nil)' 00:17:06.585 EAL: request: mp_malloc_sync 00:17:06.585 EAL: No shared files mode enabled, IPC is disabled 00:17:06.585 EAL: Heap on socket 0 was shrunk by 4MB 00:17:06.585 EAL: Trying to obtain current memory policy. 00:17:06.585 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:06.585 EAL: Restoring previous memory policy: 4 00:17:06.585 EAL: Calling mem event callback 'spdk:(nil)' 00:17:06.585 EAL: request: mp_malloc_sync 00:17:06.585 EAL: No shared files mode enabled, IPC is disabled 00:17:06.585 EAL: Heap on socket 0 was expanded by 6MB 00:17:06.585 EAL: Calling mem event callback 'spdk:(nil)' 00:17:06.585 EAL: request: mp_malloc_sync 00:17:06.585 EAL: No shared files mode enabled, IPC is disabled 00:17:06.585 EAL: Heap on socket 0 was shrunk by 6MB 00:17:06.585 EAL: Trying to obtain current memory policy. 00:17:06.585 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:06.585 EAL: Restoring previous memory policy: 4 00:17:06.585 EAL: Calling mem event callback 'spdk:(nil)' 00:17:06.585 EAL: request: mp_malloc_sync 00:17:06.585 EAL: No shared files mode enabled, IPC is disabled 00:17:06.585 EAL: Heap on socket 0 was expanded by 10MB 00:17:06.585 EAL: Calling mem event callback 'spdk:(nil)' 00:17:06.585 EAL: request: mp_malloc_sync 00:17:06.585 EAL: No shared files mode enabled, IPC is disabled 00:17:06.585 EAL: Heap on socket 0 was shrunk by 10MB 00:17:06.585 EAL: Trying to obtain current memory policy. 00:17:06.586 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:06.586 EAL: Restoring previous memory policy: 4 00:17:06.586 EAL: Calling mem event callback 'spdk:(nil)' 00:17:06.586 EAL: request: mp_malloc_sync 00:17:06.586 EAL: No shared files mode enabled, IPC is disabled 00:17:06.586 EAL: Heap on socket 0 was expanded by 18MB 00:17:06.586 EAL: Calling mem event callback 'spdk:(nil)' 00:17:06.586 EAL: request: mp_malloc_sync 00:17:06.586 EAL: No shared files mode enabled, IPC is disabled 00:17:06.586 EAL: Heap on socket 0 was shrunk by 18MB 00:17:06.586 EAL: Trying to obtain current memory policy. 00:17:06.586 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:06.586 EAL: Restoring previous memory policy: 4 00:17:06.586 EAL: Calling mem event callback 'spdk:(nil)' 00:17:06.586 EAL: request: mp_malloc_sync 00:17:06.586 EAL: No shared files mode enabled, IPC is disabled 00:17:06.586 EAL: Heap on socket 0 was expanded by 34MB 00:17:06.843 EAL: Calling mem event callback 'spdk:(nil)' 00:17:06.843 EAL: request: mp_malloc_sync 00:17:06.843 EAL: No shared files mode enabled, IPC is disabled 00:17:06.843 EAL: Heap on socket 0 was shrunk by 34MB 00:17:06.843 EAL: Trying to obtain current memory policy. 
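The expand/shrink pairs above are vtophys_spdk_malloc_test at work: each round allocates a progressively larger DMA-safe buffer, which fires the registered 'spdk:(nil)' mem event callback, then frees it again. A minimal sketch of re-running this unit test standalone, assuming the same spdk_repo checkout; HUGEMEM is a real setup.sh knob, but the 2048 MB value here is an assumed example, not this job's configuration:

    # Hedged sketch: reserve hugepages, then run the vtophys unit test by hand.
    sudo HUGEMEM=2048 /home/vagrant/spdk_repo/spdk/scripts/setup.sh
    /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys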
00:17:06.843 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:06.843 EAL: Restoring previous memory policy: 4 00:17:06.843 EAL: Calling mem event callback 'spdk:(nil)' 00:17:06.843 EAL: request: mp_malloc_sync 00:17:06.843 EAL: No shared files mode enabled, IPC is disabled 00:17:06.843 EAL: Heap on socket 0 was expanded by 66MB 00:17:07.100 EAL: Calling mem event callback 'spdk:(nil)' 00:17:07.100 EAL: request: mp_malloc_sync 00:17:07.100 EAL: No shared files mode enabled, IPC is disabled 00:17:07.100 EAL: Heap on socket 0 was shrunk by 66MB 00:17:07.100 EAL: Trying to obtain current memory policy. 00:17:07.100 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:07.100 EAL: Restoring previous memory policy: 4 00:17:07.100 EAL: Calling mem event callback 'spdk:(nil)' 00:17:07.100 EAL: request: mp_malloc_sync 00:17:07.100 EAL: No shared files mode enabled, IPC is disabled 00:17:07.100 EAL: Heap on socket 0 was expanded by 130MB 00:17:07.358 EAL: Calling mem event callback 'spdk:(nil)' 00:17:07.358 EAL: request: mp_malloc_sync 00:17:07.358 EAL: No shared files mode enabled, IPC is disabled 00:17:07.358 EAL: Heap on socket 0 was shrunk by 130MB 00:17:07.616 EAL: Trying to obtain current memory policy. 00:17:07.616 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:07.874 EAL: Restoring previous memory policy: 4 00:17:07.874 EAL: Calling mem event callback 'spdk:(nil)' 00:17:07.874 EAL: request: mp_malloc_sync 00:17:07.874 EAL: No shared files mode enabled, IPC is disabled 00:17:07.874 EAL: Heap on socket 0 was expanded by 258MB 00:17:08.441 EAL: Calling mem event callback 'spdk:(nil)' 00:17:08.441 EAL: request: mp_malloc_sync 00:17:08.441 EAL: No shared files mode enabled, IPC is disabled 00:17:08.441 EAL: Heap on socket 0 was shrunk by 258MB 00:17:09.006 EAL: Trying to obtain current memory policy. 00:17:09.006 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:09.006 EAL: Restoring previous memory policy: 4 00:17:09.006 EAL: Calling mem event callback 'spdk:(nil)' 00:17:09.006 EAL: request: mp_malloc_sync 00:17:09.006 EAL: No shared files mode enabled, IPC is disabled 00:17:09.006 EAL: Heap on socket 0 was expanded by 514MB 00:17:10.378 EAL: Calling mem event callback 'spdk:(nil)' 00:17:10.378 EAL: request: mp_malloc_sync 00:17:10.378 EAL: No shared files mode enabled, IPC is disabled 00:17:10.378 EAL: Heap on socket 0 was shrunk by 514MB 00:17:11.309 EAL: Trying to obtain current memory policy. 
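Note the progression of heap sizes (4, 6, 10, 18, 34, 66, 130, 258, 514 and finally 1026 MB): each is 2^k + 2 MB, consistent with the test allocating power-of-two-sized buffers and each allocation costing roughly one extra 2 MB hugepage of allocator overhead. A quick way to confirm from a saved console log that every expansion was paired with a shrink, i.e. that the test leaked nothing ('build.log' is a hypothetical filename):

    # Hedged sketch: expand/shrink events should appear in equal counts per size.
    grep -oE 'Heap on socket 0 was (expanded|shrunk) by [0-9]+MB' build.log \
      | sort | uniq -c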
00:17:11.309 EAL: Setting policy MPOL_PREFERRED for socket 0 00:17:11.570 EAL: Restoring previous memory policy: 4 00:17:11.570 EAL: Calling mem event callback 'spdk:(nil)' 00:17:11.570 EAL: request: mp_malloc_sync 00:17:11.570 EAL: No shared files mode enabled, IPC is disabled 00:17:11.570 EAL: Heap on socket 0 was expanded by 1026MB 00:17:14.096 EAL: Calling mem event callback 'spdk:(nil)' 00:17:14.096 EAL: request: mp_malloc_sync 00:17:14.096 EAL: No shared files mode enabled, IPC is disabled 00:17:14.096 EAL: Heap on socket 0 was shrunk by 1026MB 00:17:15.993 passed 00:17:15.993 00:17:15.993 Run Summary: Type Total Ran Passed Failed Inactive 00:17:15.993 suites 1 1 n/a 0 0 00:17:15.993 tests 2 2 2 0 0 00:17:15.993 asserts 5866 5866 5866 0 n/a 00:17:15.993 00:17:15.993 Elapsed time = 9.824 seconds 00:17:15.993 EAL: Calling mem event callback 'spdk:(nil)' 00:17:15.993 EAL: request: mp_malloc_sync 00:17:15.993 EAL: No shared files mode enabled, IPC is disabled 00:17:15.993 EAL: Heap on socket 0 was shrunk by 2MB 00:17:15.993 EAL: No shared files mode enabled, IPC is disabled 00:17:15.993 EAL: No shared files mode enabled, IPC is disabled 00:17:15.993 EAL: No shared files mode enabled, IPC is disabled 00:17:15.993 00:17:15.993 real 0m10.169s 00:17:15.993 user 0m9.004s 00:17:15.993 sys 0m0.978s 00:17:15.993 18:44:44 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:15.993 18:44:44 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:17:15.993 ************************************ 00:17:15.993 END TEST env_vtophys 00:17:15.993 ************************************ 00:17:15.993 18:44:44 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:17:15.993 18:44:44 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:15.993 18:44:44 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:15.993 18:44:44 env -- common/autotest_common.sh@10 -- # set +x 00:17:15.993 ************************************ 00:17:15.993 START TEST env_pci 00:17:15.993 ************************************ 00:17:15.993 18:44:44 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:17:15.993 00:17:15.993 00:17:15.993 CUnit - A unit testing framework for C - Version 2.1-3 00:17:15.993 http://cunit.sourceforge.net/ 00:17:15.993 00:17:15.993 00:17:15.993 Suite: pci 00:17:15.993 Test: pci_hook ...[2024-10-08 18:44:44.676182] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1111:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58275 has claimed it 00:17:15.993 passed 00:17:15.993 00:17:15.993 EAL: Cannot find device (10000:00:01.0) 00:17:15.993 EAL: Failed to attach device on primary process 00:17:15.993 Run Summary: Type Total Ran Passed Failed Inactive 00:17:15.993 suites 1 1 n/a 0 0 00:17:15.993 tests 1 1 1 0 0 00:17:15.993 asserts 25 25 25 0 n/a 00:17:15.993 00:17:15.993 Elapsed time = 0.008 seconds 00:17:15.993 00:17:15.993 real 0m0.093s 00:17:15.993 user 0m0.044s 00:17:15.993 sys 0m0.048s 00:17:15.993 18:44:44 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:15.993 18:44:44 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:17:15.993 ************************************ 00:17:15.993 END TEST env_pci 00:17:15.993 ************************************ 00:17:16.250 18:44:44 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:17:16.250 18:44:44 env -- env/env.sh@15 -- # uname 00:17:16.250 18:44:44 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:17:16.250 18:44:44 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:17:16.250 18:44:44 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:17:16.250 18:44:44 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:16.251 18:44:44 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:16.251 18:44:44 env -- common/autotest_common.sh@10 -- # set +x 00:17:16.251 ************************************ 00:17:16.251 START TEST env_dpdk_post_init 00:17:16.251 ************************************ 00:17:16.251 18:44:44 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:17:16.251 EAL: Detected CPU lcores: 10 00:17:16.251 EAL: Detected NUMA nodes: 1 00:17:16.251 EAL: Detected shared linkage of DPDK 00:17:16.251 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:17:16.251 EAL: Selected IOVA mode 'PA' 00:17:16.251 TELEMETRY: No legacy callbacks, legacy socket not created 00:17:16.508 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:17:16.508 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:17:16.508 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:17:16.508 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:17:16.508 Starting DPDK initialization... 00:17:16.508 Starting SPDK post initialization... 00:17:16.508 SPDK NVMe probe 00:17:16.508 Attaching to 0000:00:10.0 00:17:16.508 Attaching to 0000:00:11.0 00:17:16.508 Attaching to 0000:00:12.0 00:17:16.508 Attaching to 0000:00:13.0 00:17:16.508 Attached to 0000:00:10.0 00:17:16.508 Attached to 0000:00:11.0 00:17:16.508 Attached to 0000:00:13.0 00:17:16.508 Attached to 0000:00:12.0 00:17:16.508 Cleaning up... 
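The four 'Attached to' lines confirm that env_dpdk_post_init probed exactly the controllers setup.sh rebound to uio_pci_generic earlier in this run (the 1b36:0010 QEMU NVMe devices); attach completion is asynchronous, which is why 13.0 reports before 12.0. The binding can be double-checked outside the test with nothing but standard sysfs paths and the BDFs from this run:

    # Hedged sketch: print the kernel driver bound to each controller;
    # after setup.sh this should say uio_pci_generic for all four.
    for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
      printf '%s -> %s\n' "$bdf" \
        "$(basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")")"
    done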
00:17:16.508 00:17:16.508 real 0m0.321s 00:17:16.508 user 0m0.107s 00:17:16.508 sys 0m0.114s 00:17:16.508 18:44:45 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:16.508 18:44:45 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:17:16.508 ************************************ 00:17:16.508 END TEST env_dpdk_post_init 00:17:16.509 ************************************ 00:17:16.509 18:44:45 env -- env/env.sh@26 -- # uname 00:17:16.509 18:44:45 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:17:16.509 18:44:45 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:17:16.509 18:44:45 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:16.509 18:44:45 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:16.509 18:44:45 env -- common/autotest_common.sh@10 -- # set +x 00:17:16.509 ************************************ 00:17:16.509 START TEST env_mem_callbacks 00:17:16.509 ************************************ 00:17:16.509 18:44:45 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:17:16.509 EAL: Detected CPU lcores: 10 00:17:16.509 EAL: Detected NUMA nodes: 1 00:17:16.509 EAL: Detected shared linkage of DPDK 00:17:16.509 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:17:16.509 EAL: Selected IOVA mode 'PA' 00:17:16.766 TELEMETRY: No legacy callbacks, legacy socket not created 00:17:16.766 00:17:16.766 00:17:16.766 CUnit - A unit testing framework for C - Version 2.1-3 00:17:16.766 http://cunit.sourceforge.net/ 00:17:16.766 00:17:16.766 00:17:16.766 Suite: memory 00:17:16.766 Test: test ... 00:17:16.766 register 0x200000200000 2097152 00:17:16.766 malloc 3145728 00:17:16.766 register 0x200000400000 4194304 00:17:16.766 buf 0x2000004fffc0 len 3145728 PASSED 00:17:16.766 malloc 64 00:17:16.766 buf 0x2000004ffec0 len 64 PASSED 00:17:16.766 malloc 4194304 00:17:16.766 register 0x200000800000 6291456 00:17:16.766 buf 0x2000009fffc0 len 4194304 PASSED 00:17:16.766 free 0x2000004fffc0 3145728 00:17:16.766 free 0x2000004ffec0 64 00:17:16.766 unregister 0x200000400000 4194304 PASSED 00:17:16.766 free 0x2000009fffc0 4194304 00:17:16.766 unregister 0x200000800000 6291456 PASSED 00:17:16.766 malloc 8388608 00:17:16.766 register 0x200000400000 10485760 00:17:16.766 buf 0x2000005fffc0 len 8388608 PASSED 00:17:16.766 free 0x2000005fffc0 8388608 00:17:16.766 unregister 0x200000400000 10485760 PASSED 00:17:16.766 passed 00:17:16.766 00:17:16.766 Run Summary: Type Total Ran Passed Failed Inactive 00:17:16.766 suites 1 1 n/a 0 0 00:17:16.766 tests 1 1 1 0 0 00:17:16.766 asserts 15 15 15 0 n/a 00:17:16.766 00:17:16.766 Elapsed time = 0.083 seconds 00:17:16.766 00:17:16.766 real 0m0.313s 00:17:16.766 user 0m0.123s 00:17:16.766 sys 0m0.087s 00:17:16.766 18:44:45 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:16.766 ************************************ 00:17:16.766 END TEST env_mem_callbacks 00:17:16.766 ************************************ 00:17:16.766 18:44:45 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:17:16.766 00:17:16.766 real 0m11.642s 00:17:16.766 user 0m9.754s 00:17:16.766 sys 0m1.496s 00:17:16.766 18:44:45 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:16.766 ************************************ 00:17:16.766 END TEST env 00:17:16.766 ************************************ 00:17:16.766 18:44:45 env -- 
common/autotest_common.sh@10 -- # set +x 00:17:17.024 18:44:45 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:17:17.024 18:44:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:17.024 18:44:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:17.024 18:44:45 -- common/autotest_common.sh@10 -- # set +x 00:17:17.024 ************************************ 00:17:17.024 START TEST rpc 00:17:17.024 ************************************ 00:17:17.024 18:44:45 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:17:17.024 * Looking for test storage... 00:17:17.024 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:17:17.024 18:44:45 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:17.024 18:44:45 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:17.024 18:44:45 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:17:17.024 18:44:45 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:17.024 18:44:45 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:17.024 18:44:45 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:17.024 18:44:45 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:17.024 18:44:45 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:17:17.024 18:44:45 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:17:17.024 18:44:45 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:17:17.024 18:44:45 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:17:17.024 18:44:45 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:17:17.024 18:44:45 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:17:17.024 18:44:45 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:17:17.024 18:44:45 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:17.024 18:44:45 rpc -- scripts/common.sh@344 -- # case "$op" in 00:17:17.024 18:44:45 rpc -- scripts/common.sh@345 -- # : 1 00:17:17.024 18:44:45 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:17.024 18:44:45 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:17.024 18:44:45 rpc -- scripts/common.sh@365 -- # decimal 1 00:17:17.024 18:44:45 rpc -- scripts/common.sh@353 -- # local d=1 00:17:17.024 18:44:45 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:17.024 18:44:45 rpc -- scripts/common.sh@355 -- # echo 1 00:17:17.024 18:44:45 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:17.024 18:44:45 rpc -- scripts/common.sh@366 -- # decimal 2 00:17:17.024 18:44:45 rpc -- scripts/common.sh@353 -- # local d=2 00:17:17.024 18:44:45 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:17.024 18:44:45 rpc -- scripts/common.sh@355 -- # echo 2 00:17:17.024 18:44:45 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:17.024 18:44:45 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:17.024 18:44:45 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:17.024 18:44:45 rpc -- scripts/common.sh@368 -- # return 0 00:17:17.024 18:44:45 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:17.024 18:44:45 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:17.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.024 --rc genhtml_branch_coverage=1 00:17:17.024 --rc genhtml_function_coverage=1 00:17:17.024 --rc genhtml_legend=1 00:17:17.024 --rc geninfo_all_blocks=1 00:17:17.024 --rc geninfo_unexecuted_blocks=1 00:17:17.024 00:17:17.024 ' 00:17:17.024 18:44:45 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:17.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.024 --rc genhtml_branch_coverage=1 00:17:17.024 --rc genhtml_function_coverage=1 00:17:17.024 --rc genhtml_legend=1 00:17:17.024 --rc geninfo_all_blocks=1 00:17:17.024 --rc geninfo_unexecuted_blocks=1 00:17:17.024 00:17:17.024 ' 00:17:17.024 18:44:45 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:17.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.024 --rc genhtml_branch_coverage=1 00:17:17.024 --rc genhtml_function_coverage=1 00:17:17.024 --rc genhtml_legend=1 00:17:17.024 --rc geninfo_all_blocks=1 00:17:17.024 --rc geninfo_unexecuted_blocks=1 00:17:17.024 00:17:17.024 ' 00:17:17.024 18:44:45 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:17.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.024 --rc genhtml_branch_coverage=1 00:17:17.024 --rc genhtml_function_coverage=1 00:17:17.024 --rc genhtml_legend=1 00:17:17.024 --rc geninfo_all_blocks=1 00:17:17.024 --rc geninfo_unexecuted_blocks=1 00:17:17.024 00:17:17.024 ' 00:17:17.024 18:44:45 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58408 00:17:17.024 18:44:45 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:17:17.024 18:44:45 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:17:17.024 18:44:45 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58408 00:17:17.024 18:44:45 rpc -- common/autotest_common.sh@831 -- # '[' -z 58408 ']' 00:17:17.024 18:44:45 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.024 18:44:45 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:17.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:17.024 18:44:45 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
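waitforlisten above blocks until the freshly spawned spdk_tgt (pid 58408) answers on its UNIX-domain RPC socket. A minimal sketch of the same start-and-poll pattern done by hand; rpc.py and rpc_get_methods are standard SPDK tooling, while the 0.1 s poll interval is an arbitrary choice:

    # Hedged sketch: start the target with the bdev tracepoint group enabled
    # (-e bdev, as rpc.sh does) and poll the default /var/tmp/spdk.sock.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
    spdk_pid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods \
          >/dev/null 2>&1; do
      sleep 0.1
    done
    echo "spdk_tgt $spdk_pid is listening"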
00:17:17.024 18:44:45 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:17.024 18:44:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:17:17.281 [2024-10-08 18:44:45.851253] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:17:17.282 [2024-10-08 18:44:45.851430] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58408 ] 00:17:17.282 [2024-10-08 18:44:46.025728] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.846 [2024-10-08 18:44:46.297059] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:17:17.846 [2024-10-08 18:44:46.297136] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58408' to capture a snapshot of events at runtime. 00:17:17.846 [2024-10-08 18:44:46.297152] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:17.846 [2024-10-08 18:44:46.297184] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:17.846 [2024-10-08 18:44:46.297196] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58408 for offline analysis/debug. 00:17:17.846 [2024-10-08 18:44:46.298729] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.778 18:44:47 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:18.778 18:44:47 rpc -- common/autotest_common.sh@864 -- # return 0 00:17:18.778 18:44:47 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:17:18.778 18:44:47 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:17:18.778 18:44:47 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:17:18.778 18:44:47 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:17:18.778 18:44:47 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:18.778 18:44:47 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:18.778 18:44:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.778 ************************************ 00:17:18.778 START TEST rpc_integrity 00:17:18.778 ************************************ 00:17:18.778 18:44:47 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:17:18.778 18:44:47 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:18.778 18:44:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.778 18:44:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:18.778 18:44:47 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.778 18:44:47 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:17:18.778 18:44:47 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:17:18.778 18:44:47 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:17:18.778 18:44:47 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:17:18.778 18:44:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.778 18:44:47 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:18.778 18:44:47 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.778 18:44:47 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:17:18.778 18:44:47 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:17:18.778 18:44:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.778 18:44:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:18.778 18:44:47 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.778 18:44:47 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:17:18.778 { 00:17:18.778 "name": "Malloc0", 00:17:18.778 "aliases": [ 00:17:18.778 "a5bd26f2-9289-498d-a33f-781aab55bf4c" 00:17:18.778 ], 00:17:18.778 "product_name": "Malloc disk", 00:17:18.778 "block_size": 512, 00:17:18.778 "num_blocks": 16384, 00:17:18.778 "uuid": "a5bd26f2-9289-498d-a33f-781aab55bf4c", 00:17:18.778 "assigned_rate_limits": { 00:17:18.778 "rw_ios_per_sec": 0, 00:17:18.778 "rw_mbytes_per_sec": 0, 00:17:18.778 "r_mbytes_per_sec": 0, 00:17:18.778 "w_mbytes_per_sec": 0 00:17:18.778 }, 00:17:18.778 "claimed": false, 00:17:18.778 "zoned": false, 00:17:18.778 "supported_io_types": { 00:17:18.778 "read": true, 00:17:18.778 "write": true, 00:17:18.778 "unmap": true, 00:17:18.779 "flush": true, 00:17:18.779 "reset": true, 00:17:18.779 "nvme_admin": false, 00:17:18.779 "nvme_io": false, 00:17:18.779 "nvme_io_md": false, 00:17:18.779 "write_zeroes": true, 00:17:18.779 "zcopy": true, 00:17:18.779 "get_zone_info": false, 00:17:18.779 "zone_management": false, 00:17:18.779 "zone_append": false, 00:17:18.779 "compare": false, 00:17:18.779 "compare_and_write": false, 00:17:18.779 "abort": true, 00:17:18.779 "seek_hole": false, 00:17:18.779 "seek_data": false, 00:17:18.779 "copy": true, 00:17:18.779 "nvme_iov_md": false 00:17:18.779 }, 00:17:18.779 "memory_domains": [ 00:17:18.779 { 00:17:18.779 "dma_device_id": "system", 00:17:18.779 "dma_device_type": 1 00:17:18.779 }, 00:17:18.779 { 00:17:18.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:18.779 "dma_device_type": 2 00:17:18.779 } 00:17:18.779 ], 00:17:18.779 "driver_specific": {} 00:17:18.779 } 00:17:18.779 ]' 00:17:18.779 18:44:47 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:17:18.779 18:44:47 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:17:18.779 18:44:47 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:17:18.779 18:44:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.779 18:44:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:18.779 [2024-10-08 18:44:47.490798] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:17:18.779 [2024-10-08 18:44:47.490904] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.779 [2024-10-08 18:44:47.490948] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:18.779 [2024-10-08 18:44:47.490983] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.779 [2024-10-08 18:44:47.494103] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.779 [2024-10-08 18:44:47.494171] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:17:18.779 Passthru0 00:17:18.779 18:44:47 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.779 
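rpc_integrity drives the whole malloc/passthru lifecycle over RPC: create Malloc0, layer Passthru0 on top of it (the vbdev_passthru NOTICE lines above mark the claim), dump both with bdev_get_bdevs, then tear down in reverse order. The same cycle issued by hand, as a sketch; 8 MiB at a 512-byte block size yields exactly the 16384 num_blocks visible in the JSON above:

    # Hedged sketch: the create/inspect/delete cycle rpc_integrity performs.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    malloc=$($rpc bdev_malloc_create 8 512)      # target returns the bdev name
    $rpc bdev_passthru_create -b "$malloc" -p Passthru0
    $rpc bdev_get_bdevs | jq length              # expect 2
    $rpc bdev_passthru_delete Passthru0
    $rpc bdev_malloc_delete "$malloc"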
18:44:47 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:17:18.779 18:44:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.779 18:44:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:18.779 18:44:47 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.037 18:44:47 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:17:19.037 { 00:17:19.037 "name": "Malloc0", 00:17:19.037 "aliases": [ 00:17:19.037 "a5bd26f2-9289-498d-a33f-781aab55bf4c" 00:17:19.037 ], 00:17:19.037 "product_name": "Malloc disk", 00:17:19.037 "block_size": 512, 00:17:19.037 "num_blocks": 16384, 00:17:19.037 "uuid": "a5bd26f2-9289-498d-a33f-781aab55bf4c", 00:17:19.037 "assigned_rate_limits": { 00:17:19.037 "rw_ios_per_sec": 0, 00:17:19.037 "rw_mbytes_per_sec": 0, 00:17:19.037 "r_mbytes_per_sec": 0, 00:17:19.037 "w_mbytes_per_sec": 0 00:17:19.037 }, 00:17:19.037 "claimed": true, 00:17:19.037 "claim_type": "exclusive_write", 00:17:19.037 "zoned": false, 00:17:19.037 "supported_io_types": { 00:17:19.037 "read": true, 00:17:19.037 "write": true, 00:17:19.038 "unmap": true, 00:17:19.038 "flush": true, 00:17:19.038 "reset": true, 00:17:19.038 "nvme_admin": false, 00:17:19.038 "nvme_io": false, 00:17:19.038 "nvme_io_md": false, 00:17:19.038 "write_zeroes": true, 00:17:19.038 "zcopy": true, 00:17:19.038 "get_zone_info": false, 00:17:19.038 "zone_management": false, 00:17:19.038 "zone_append": false, 00:17:19.038 "compare": false, 00:17:19.038 "compare_and_write": false, 00:17:19.038 "abort": true, 00:17:19.038 "seek_hole": false, 00:17:19.038 "seek_data": false, 00:17:19.038 "copy": true, 00:17:19.038 "nvme_iov_md": false 00:17:19.038 }, 00:17:19.038 "memory_domains": [ 00:17:19.038 { 00:17:19.038 "dma_device_id": "system", 00:17:19.038 "dma_device_type": 1 00:17:19.038 }, 00:17:19.038 { 00:17:19.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.038 "dma_device_type": 2 00:17:19.038 } 00:17:19.038 ], 00:17:19.038 "driver_specific": {} 00:17:19.038 }, 00:17:19.038 { 00:17:19.038 "name": "Passthru0", 00:17:19.038 "aliases": [ 00:17:19.038 "5970a542-2e48-5915-a4b0-baefe195daf0" 00:17:19.038 ], 00:17:19.038 "product_name": "passthru", 00:17:19.038 "block_size": 512, 00:17:19.038 "num_blocks": 16384, 00:17:19.038 "uuid": "5970a542-2e48-5915-a4b0-baefe195daf0", 00:17:19.038 "assigned_rate_limits": { 00:17:19.038 "rw_ios_per_sec": 0, 00:17:19.038 "rw_mbytes_per_sec": 0, 00:17:19.038 "r_mbytes_per_sec": 0, 00:17:19.038 "w_mbytes_per_sec": 0 00:17:19.038 }, 00:17:19.038 "claimed": false, 00:17:19.038 "zoned": false, 00:17:19.038 "supported_io_types": { 00:17:19.038 "read": true, 00:17:19.038 "write": true, 00:17:19.038 "unmap": true, 00:17:19.038 "flush": true, 00:17:19.038 "reset": true, 00:17:19.038 "nvme_admin": false, 00:17:19.038 "nvme_io": false, 00:17:19.038 "nvme_io_md": false, 00:17:19.038 "write_zeroes": true, 00:17:19.038 "zcopy": true, 00:17:19.038 "get_zone_info": false, 00:17:19.038 "zone_management": false, 00:17:19.038 "zone_append": false, 00:17:19.038 "compare": false, 00:17:19.038 "compare_and_write": false, 00:17:19.038 "abort": true, 00:17:19.038 "seek_hole": false, 00:17:19.038 "seek_data": false, 00:17:19.038 "copy": true, 00:17:19.038 "nvme_iov_md": false 00:17:19.038 }, 00:17:19.038 "memory_domains": [ 00:17:19.038 { 00:17:19.038 "dma_device_id": "system", 00:17:19.038 "dma_device_type": 1 00:17:19.038 }, 00:17:19.038 { 00:17:19.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.038 "dma_device_type": 2 
00:17:19.038 } 00:17:19.038 ], 00:17:19.038 "driver_specific": { 00:17:19.038 "passthru": { 00:17:19.038 "name": "Passthru0", 00:17:19.038 "base_bdev_name": "Malloc0" 00:17:19.038 } 00:17:19.038 } 00:17:19.038 } 00:17:19.038 ]' 00:17:19.038 18:44:47 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:17:19.038 18:44:47 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:17:19.038 18:44:47 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:17:19.038 18:44:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.038 18:44:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:19.038 18:44:47 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.038 18:44:47 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:19.038 18:44:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.038 18:44:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:19.038 18:44:47 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.038 18:44:47 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:19.038 18:44:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.038 18:44:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:19.038 18:44:47 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.038 18:44:47 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:17:19.038 18:44:47 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:17:19.038 ************************************ 00:17:19.038 END TEST rpc_integrity 00:17:19.038 ************************************ 00:17:19.038 18:44:47 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:17:19.038 00:17:19.038 real 0m0.327s 00:17:19.038 user 0m0.166s 00:17:19.038 sys 0m0.050s 00:17:19.038 18:44:47 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:19.038 18:44:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:19.038 18:44:47 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:17:19.038 18:44:47 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:19.038 18:44:47 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:19.038 18:44:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.038 ************************************ 00:17:19.038 START TEST rpc_plugins 00:17:19.038 ************************************ 00:17:19.038 18:44:47 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:17:19.038 18:44:47 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:17:19.038 18:44:47 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.038 18:44:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:17:19.038 18:44:47 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.038 18:44:47 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:17:19.038 18:44:47 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:17:19.038 18:44:47 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.038 18:44:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:17:19.038 18:44:47 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.038 18:44:47 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:17:19.038 { 00:17:19.038 "name": "Malloc1", 00:17:19.038 "aliases": 
[ 00:17:19.038 "206c6e8f-26db-40fb-aabe-67d68116f7d0" 00:17:19.038 ], 00:17:19.038 "product_name": "Malloc disk", 00:17:19.038 "block_size": 4096, 00:17:19.038 "num_blocks": 256, 00:17:19.038 "uuid": "206c6e8f-26db-40fb-aabe-67d68116f7d0", 00:17:19.038 "assigned_rate_limits": { 00:17:19.038 "rw_ios_per_sec": 0, 00:17:19.038 "rw_mbytes_per_sec": 0, 00:17:19.038 "r_mbytes_per_sec": 0, 00:17:19.038 "w_mbytes_per_sec": 0 00:17:19.038 }, 00:17:19.038 "claimed": false, 00:17:19.038 "zoned": false, 00:17:19.038 "supported_io_types": { 00:17:19.038 "read": true, 00:17:19.038 "write": true, 00:17:19.038 "unmap": true, 00:17:19.038 "flush": true, 00:17:19.038 "reset": true, 00:17:19.038 "nvme_admin": false, 00:17:19.038 "nvme_io": false, 00:17:19.038 "nvme_io_md": false, 00:17:19.038 "write_zeroes": true, 00:17:19.038 "zcopy": true, 00:17:19.038 "get_zone_info": false, 00:17:19.038 "zone_management": false, 00:17:19.038 "zone_append": false, 00:17:19.038 "compare": false, 00:17:19.038 "compare_and_write": false, 00:17:19.038 "abort": true, 00:17:19.038 "seek_hole": false, 00:17:19.038 "seek_data": false, 00:17:19.038 "copy": true, 00:17:19.038 "nvme_iov_md": false 00:17:19.038 }, 00:17:19.038 "memory_domains": [ 00:17:19.038 { 00:17:19.038 "dma_device_id": "system", 00:17:19.038 "dma_device_type": 1 00:17:19.038 }, 00:17:19.038 { 00:17:19.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.038 "dma_device_type": 2 00:17:19.038 } 00:17:19.038 ], 00:17:19.038 "driver_specific": {} 00:17:19.038 } 00:17:19.038 ]' 00:17:19.038 18:44:47 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:17:19.297 18:44:47 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:17:19.297 18:44:47 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:17:19.297 18:44:47 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.297 18:44:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:17:19.297 18:44:47 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.297 18:44:47 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:17:19.297 18:44:47 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.297 18:44:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:17:19.297 18:44:47 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.297 18:44:47 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:17:19.297 18:44:47 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:17:19.297 ************************************ 00:17:19.297 END TEST rpc_plugins 00:17:19.297 ************************************ 00:17:19.297 18:44:47 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:17:19.297 00:17:19.297 real 0m0.152s 00:17:19.297 user 0m0.090s 00:17:19.297 sys 0m0.021s 00:17:19.297 18:44:47 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:19.297 18:44:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:17:19.297 18:44:47 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:17:19.297 18:44:47 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:19.297 18:44:47 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:19.297 18:44:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.297 ************************************ 00:17:19.297 START TEST rpc_trace_cmd_test 00:17:19.298 ************************************ 00:17:19.298 18:44:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 
-- # rpc_trace_cmd_test 00:17:19.298 18:44:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:17:19.298 18:44:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:17:19.298 18:44:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.298 18:44:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.298 18:44:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.298 18:44:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:17:19.298 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58408", 00:17:19.298 "tpoint_group_mask": "0x8", 00:17:19.298 "iscsi_conn": { 00:17:19.298 "mask": "0x2", 00:17:19.298 "tpoint_mask": "0x0" 00:17:19.298 }, 00:17:19.298 "scsi": { 00:17:19.298 "mask": "0x4", 00:17:19.298 "tpoint_mask": "0x0" 00:17:19.298 }, 00:17:19.298 "bdev": { 00:17:19.298 "mask": "0x8", 00:17:19.298 "tpoint_mask": "0xffffffffffffffff" 00:17:19.298 }, 00:17:19.298 "nvmf_rdma": { 00:17:19.298 "mask": "0x10", 00:17:19.298 "tpoint_mask": "0x0" 00:17:19.298 }, 00:17:19.298 "nvmf_tcp": { 00:17:19.298 "mask": "0x20", 00:17:19.298 "tpoint_mask": "0x0" 00:17:19.298 }, 00:17:19.298 "ftl": { 00:17:19.298 "mask": "0x40", 00:17:19.298 "tpoint_mask": "0x0" 00:17:19.298 }, 00:17:19.298 "blobfs": { 00:17:19.298 "mask": "0x80", 00:17:19.298 "tpoint_mask": "0x0" 00:17:19.298 }, 00:17:19.298 "dsa": { 00:17:19.298 "mask": "0x200", 00:17:19.298 "tpoint_mask": "0x0" 00:17:19.298 }, 00:17:19.298 "thread": { 00:17:19.298 "mask": "0x400", 00:17:19.298 "tpoint_mask": "0x0" 00:17:19.298 }, 00:17:19.298 "nvme_pcie": { 00:17:19.298 "mask": "0x800", 00:17:19.298 "tpoint_mask": "0x0" 00:17:19.298 }, 00:17:19.298 "iaa": { 00:17:19.298 "mask": "0x1000", 00:17:19.298 "tpoint_mask": "0x0" 00:17:19.298 }, 00:17:19.298 "nvme_tcp": { 00:17:19.298 "mask": "0x2000", 00:17:19.298 "tpoint_mask": "0x0" 00:17:19.298 }, 00:17:19.298 "bdev_nvme": { 00:17:19.298 "mask": "0x4000", 00:17:19.298 "tpoint_mask": "0x0" 00:17:19.298 }, 00:17:19.298 "sock": { 00:17:19.298 "mask": "0x8000", 00:17:19.298 "tpoint_mask": "0x0" 00:17:19.298 }, 00:17:19.298 "blob": { 00:17:19.298 "mask": "0x10000", 00:17:19.298 "tpoint_mask": "0x0" 00:17:19.298 }, 00:17:19.298 "bdev_raid": { 00:17:19.298 "mask": "0x20000", 00:17:19.298 "tpoint_mask": "0x0" 00:17:19.298 }, 00:17:19.298 "scheduler": { 00:17:19.298 "mask": "0x40000", 00:17:19.298 "tpoint_mask": "0x0" 00:17:19.298 } 00:17:19.298 }' 00:17:19.298 18:44:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:17:19.298 18:44:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:17:19.298 18:44:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:17:19.298 18:44:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:17:19.298 18:44:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:17:19.556 18:44:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:17:19.556 18:44:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:17:19.556 18:44:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:17:19.556 18:44:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:17:19.556 ************************************ 00:17:19.556 END TEST rpc_trace_cmd_test 00:17:19.556 ************************************ 00:17:19.556 18:44:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:17:19.556 00:17:19.556 real 0m0.243s 
00:17:19.556 user 0m0.204s 00:17:19.556 sys 0m0.029s 00:17:19.556 18:44:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:19.556 18:44:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.556 18:44:48 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:17:19.556 18:44:48 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:17:19.557 18:44:48 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:17:19.557 18:44:48 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:19.557 18:44:48 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:19.557 18:44:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.557 ************************************ 00:17:19.557 START TEST rpc_daemon_integrity 00:17:19.557 ************************************ 00:17:19.557 18:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:17:19.557 18:44:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:19.557 18:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.557 18:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:19.557 18:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.557 18:44:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:17:19.557 18:44:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:17:19.557 18:44:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:17:19.557 18:44:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:17:19.557 18:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.557 18:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:19.557 18:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.557 18:44:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:17:19.557 18:44:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:17:19.557 18:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.557 18:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:19.815 18:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.815 18:44:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:17:19.815 { 00:17:19.815 "name": "Malloc2", 00:17:19.815 "aliases": [ 00:17:19.815 "f72996db-003c-4627-8857-92f44f6a2a12" 00:17:19.815 ], 00:17:19.815 "product_name": "Malloc disk", 00:17:19.815 "block_size": 512, 00:17:19.815 "num_blocks": 16384, 00:17:19.815 "uuid": "f72996db-003c-4627-8857-92f44f6a2a12", 00:17:19.815 "assigned_rate_limits": { 00:17:19.815 "rw_ios_per_sec": 0, 00:17:19.815 "rw_mbytes_per_sec": 0, 00:17:19.815 "r_mbytes_per_sec": 0, 00:17:19.815 "w_mbytes_per_sec": 0 00:17:19.815 }, 00:17:19.815 "claimed": false, 00:17:19.815 "zoned": false, 00:17:19.815 "supported_io_types": { 00:17:19.815 "read": true, 00:17:19.815 "write": true, 00:17:19.815 "unmap": true, 00:17:19.815 "flush": true, 00:17:19.815 "reset": true, 00:17:19.815 "nvme_admin": false, 00:17:19.815 "nvme_io": false, 00:17:19.815 "nvme_io_md": false, 00:17:19.815 "write_zeroes": true, 00:17:19.815 "zcopy": true, 00:17:19.815 "get_zone_info": false, 00:17:19.815 "zone_management": false, 00:17:19.815 "zone_append": false, 00:17:19.815 "compare": false, 00:17:19.815 
"compare_and_write": false, 00:17:19.815 "abort": true, 00:17:19.815 "seek_hole": false, 00:17:19.815 "seek_data": false, 00:17:19.815 "copy": true, 00:17:19.815 "nvme_iov_md": false 00:17:19.815 }, 00:17:19.815 "memory_domains": [ 00:17:19.815 { 00:17:19.815 "dma_device_id": "system", 00:17:19.815 "dma_device_type": 1 00:17:19.815 }, 00:17:19.815 { 00:17:19.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.815 "dma_device_type": 2 00:17:19.815 } 00:17:19.815 ], 00:17:19.815 "driver_specific": {} 00:17:19.815 } 00:17:19.815 ]' 00:17:19.815 18:44:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:17:19.815 18:44:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:17:19.815 18:44:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:17:19.815 18:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.815 18:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:19.815 [2024-10-08 18:44:48.374822] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:17:19.815 [2024-10-08 18:44:48.374905] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:19.815 [2024-10-08 18:44:48.374934] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:19.815 [2024-10-08 18:44:48.374951] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:19.815 [2024-10-08 18:44:48.377893] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:19.815 [2024-10-08 18:44:48.377942] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:17:19.815 Passthru0 00:17:19.815 18:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.815 18:44:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:17:19.815 18:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.815 18:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:19.815 18:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.815 18:44:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:17:19.815 { 00:17:19.815 "name": "Malloc2", 00:17:19.815 "aliases": [ 00:17:19.815 "f72996db-003c-4627-8857-92f44f6a2a12" 00:17:19.815 ], 00:17:19.815 "product_name": "Malloc disk", 00:17:19.815 "block_size": 512, 00:17:19.815 "num_blocks": 16384, 00:17:19.815 "uuid": "f72996db-003c-4627-8857-92f44f6a2a12", 00:17:19.815 "assigned_rate_limits": { 00:17:19.815 "rw_ios_per_sec": 0, 00:17:19.815 "rw_mbytes_per_sec": 0, 00:17:19.815 "r_mbytes_per_sec": 0, 00:17:19.815 "w_mbytes_per_sec": 0 00:17:19.815 }, 00:17:19.815 "claimed": true, 00:17:19.815 "claim_type": "exclusive_write", 00:17:19.815 "zoned": false, 00:17:19.815 "supported_io_types": { 00:17:19.815 "read": true, 00:17:19.815 "write": true, 00:17:19.815 "unmap": true, 00:17:19.815 "flush": true, 00:17:19.815 "reset": true, 00:17:19.815 "nvme_admin": false, 00:17:19.815 "nvme_io": false, 00:17:19.815 "nvme_io_md": false, 00:17:19.815 "write_zeroes": true, 00:17:19.815 "zcopy": true, 00:17:19.815 "get_zone_info": false, 00:17:19.815 "zone_management": false, 00:17:19.815 "zone_append": false, 00:17:19.815 "compare": false, 00:17:19.815 "compare_and_write": false, 00:17:19.815 "abort": true, 00:17:19.815 "seek_hole": false, 00:17:19.815 "seek_data": false, 
00:17:19.815 "copy": true, 00:17:19.815 "nvme_iov_md": false 00:17:19.815 }, 00:17:19.815 "memory_domains": [ 00:17:19.815 { 00:17:19.815 "dma_device_id": "system", 00:17:19.815 "dma_device_type": 1 00:17:19.815 }, 00:17:19.815 { 00:17:19.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.815 "dma_device_type": 2 00:17:19.815 } 00:17:19.815 ], 00:17:19.815 "driver_specific": {} 00:17:19.815 }, 00:17:19.815 { 00:17:19.815 "name": "Passthru0", 00:17:19.815 "aliases": [ 00:17:19.815 "31c9224c-1521-5865-8a1d-93a07ec42dd8" 00:17:19.815 ], 00:17:19.815 "product_name": "passthru", 00:17:19.815 "block_size": 512, 00:17:19.815 "num_blocks": 16384, 00:17:19.815 "uuid": "31c9224c-1521-5865-8a1d-93a07ec42dd8", 00:17:19.815 "assigned_rate_limits": { 00:17:19.815 "rw_ios_per_sec": 0, 00:17:19.815 "rw_mbytes_per_sec": 0, 00:17:19.815 "r_mbytes_per_sec": 0, 00:17:19.815 "w_mbytes_per_sec": 0 00:17:19.815 }, 00:17:19.815 "claimed": false, 00:17:19.815 "zoned": false, 00:17:19.815 "supported_io_types": { 00:17:19.815 "read": true, 00:17:19.815 "write": true, 00:17:19.815 "unmap": true, 00:17:19.815 "flush": true, 00:17:19.815 "reset": true, 00:17:19.815 "nvme_admin": false, 00:17:19.815 "nvme_io": false, 00:17:19.815 "nvme_io_md": false, 00:17:19.815 "write_zeroes": true, 00:17:19.815 "zcopy": true, 00:17:19.815 "get_zone_info": false, 00:17:19.815 "zone_management": false, 00:17:19.815 "zone_append": false, 00:17:19.815 "compare": false, 00:17:19.815 "compare_and_write": false, 00:17:19.815 "abort": true, 00:17:19.815 "seek_hole": false, 00:17:19.815 "seek_data": false, 00:17:19.815 "copy": true, 00:17:19.815 "nvme_iov_md": false 00:17:19.815 }, 00:17:19.815 "memory_domains": [ 00:17:19.815 { 00:17:19.815 "dma_device_id": "system", 00:17:19.815 "dma_device_type": 1 00:17:19.815 }, 00:17:19.815 { 00:17:19.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.815 "dma_device_type": 2 00:17:19.815 } 00:17:19.815 ], 00:17:19.815 "driver_specific": { 00:17:19.815 "passthru": { 00:17:19.815 "name": "Passthru0", 00:17:19.815 "base_bdev_name": "Malloc2" 00:17:19.815 } 00:17:19.815 } 00:17:19.815 } 00:17:19.815 ]' 00:17:19.815 18:44:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:17:19.816 18:44:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:17:19.816 18:44:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:17:19.816 18:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.816 18:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:19.816 18:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.816 18:44:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:17:19.816 18:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.816 18:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:19.816 18:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.816 18:44:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:19.816 18:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.816 18:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:19.816 18:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.816 18:44:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:17:19.816 18:44:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:17:20.074 ************************************ 00:17:20.074 END TEST rpc_daemon_integrity 00:17:20.074 ************************************ 00:17:20.074 18:44:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:17:20.074 00:17:20.074 real 0m0.347s 00:17:20.074 user 0m0.184s 00:17:20.074 sys 0m0.055s 00:17:20.074 18:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:20.074 18:44:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:17:20.074 18:44:48 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:17:20.074 18:44:48 rpc -- rpc/rpc.sh@84 -- # killprocess 58408 00:17:20.074 18:44:48 rpc -- common/autotest_common.sh@950 -- # '[' -z 58408 ']' 00:17:20.074 18:44:48 rpc -- common/autotest_common.sh@954 -- # kill -0 58408 00:17:20.074 18:44:48 rpc -- common/autotest_common.sh@955 -- # uname 00:17:20.074 18:44:48 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:20.074 18:44:48 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58408 00:17:20.074 killing process with pid 58408 00:17:20.074 18:44:48 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:20.074 18:44:48 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:20.074 18:44:48 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58408' 00:17:20.074 18:44:48 rpc -- common/autotest_common.sh@969 -- # kill 58408 00:17:20.074 18:44:48 rpc -- common/autotest_common.sh@974 -- # wait 58408 00:17:23.357 00:17:23.357 real 0m6.277s 00:17:23.357 user 0m6.802s 00:17:23.357 sys 0m0.881s 00:17:23.357 18:44:51 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:23.357 18:44:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.357 ************************************ 00:17:23.357 END TEST rpc 00:17:23.357 ************************************ 00:17:23.357 18:44:51 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:17:23.357 18:44:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:23.357 18:44:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:23.357 18:44:51 -- common/autotest_common.sh@10 -- # set +x 00:17:23.357 ************************************ 00:17:23.357 START TEST skip_rpc 00:17:23.357 ************************************ 00:17:23.357 18:44:51 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:17:23.357 * Looking for test storage... 
00:17:23.357 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:17:23.357 18:44:51 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:23.357 18:44:51 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:17:23.357 18:44:51 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:23.357 18:44:52 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:23.357 18:44:52 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:23.357 18:44:52 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:23.357 18:44:52 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:23.357 18:44:52 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:17:23.357 18:44:52 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:17:23.357 18:44:52 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:17:23.357 18:44:52 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:17:23.357 18:44:52 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:17:23.357 18:44:52 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:17:23.357 18:44:52 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:17:23.357 18:44:52 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:23.357 18:44:52 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:17:23.357 18:44:52 skip_rpc -- scripts/common.sh@345 -- # : 1 00:17:23.357 18:44:52 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:23.357 18:44:52 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:23.357 18:44:52 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:17:23.357 18:44:52 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:17:23.357 18:44:52 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:23.357 18:44:52 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:17:23.357 18:44:52 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:23.357 18:44:52 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:17:23.357 18:44:52 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:17:23.357 18:44:52 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:23.357 18:44:52 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:17:23.357 18:44:52 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:23.357 18:44:52 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:23.357 18:44:52 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:23.357 18:44:52 skip_rpc -- scripts/common.sh@368 -- # return 0 00:17:23.357 18:44:52 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:23.357 18:44:52 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:23.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.357 --rc genhtml_branch_coverage=1 00:17:23.357 --rc genhtml_function_coverage=1 00:17:23.357 --rc genhtml_legend=1 00:17:23.357 --rc geninfo_all_blocks=1 00:17:23.357 --rc geninfo_unexecuted_blocks=1 00:17:23.357 00:17:23.357 ' 00:17:23.357 18:44:52 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:23.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.357 --rc genhtml_branch_coverage=1 00:17:23.357 --rc genhtml_function_coverage=1 00:17:23.357 --rc genhtml_legend=1 00:17:23.357 --rc geninfo_all_blocks=1 00:17:23.357 --rc geninfo_unexecuted_blocks=1 00:17:23.357 00:17:23.357 ' 00:17:23.357 18:44:52 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:17:23.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.357 --rc genhtml_branch_coverage=1 00:17:23.357 --rc genhtml_function_coverage=1 00:17:23.357 --rc genhtml_legend=1 00:17:23.358 --rc geninfo_all_blocks=1 00:17:23.358 --rc geninfo_unexecuted_blocks=1 00:17:23.358 00:17:23.358 ' 00:17:23.358 18:44:52 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:23.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.358 --rc genhtml_branch_coverage=1 00:17:23.358 --rc genhtml_function_coverage=1 00:17:23.358 --rc genhtml_legend=1 00:17:23.358 --rc geninfo_all_blocks=1 00:17:23.358 --rc geninfo_unexecuted_blocks=1 00:17:23.358 00:17:23.358 ' 00:17:23.358 18:44:52 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:17:23.358 18:44:52 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:17:23.358 18:44:52 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:17:23.358 18:44:52 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:23.358 18:44:52 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:23.358 18:44:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.358 ************************************ 00:17:23.358 START TEST skip_rpc 00:17:23.358 ************************************ 00:17:23.358 18:44:52 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:17:23.358 18:44:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58648 00:17:23.358 18:44:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:17:23.358 18:44:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:17:23.358 18:44:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:17:23.616 [2024-10-08 18:44:52.191156] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:17:23.616 [2024-10-08 18:44:52.191344] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58648 ] 00:17:23.616 [2024-10-08 18:44:52.363470] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.876 [2024-10-08 18:44:52.611667] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.137 18:44:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:17:29.137 18:44:57 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:17:29.137 18:44:57 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:17:29.137 18:44:57 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:29.137 18:44:57 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:29.137 18:44:57 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:29.137 18:44:57 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:29.137 18:44:57 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:17:29.137 18:44:57 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.137 18:44:57 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:29.137 18:44:57 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:29.137 18:44:57 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:17:29.137 18:44:57 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:29.137 18:44:57 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:29.137 18:44:57 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:29.137 18:44:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:17:29.137 18:44:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58648 00:17:29.137 18:44:57 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 58648 ']' 00:17:29.137 18:44:57 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 58648 00:17:29.137 18:44:57 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:17:29.137 18:44:57 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:29.137 18:44:57 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58648 00:17:29.137 18:44:57 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:29.137 killing process with pid 58648 00:17:29.137 18:44:57 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:29.137 18:44:57 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58648' 00:17:29.137 18:44:57 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 58648 00:17:29.137 18:44:57 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 58648 00:17:31.667 00:17:31.667 real 0m8.076s 00:17:31.667 user 0m7.524s 00:17:31.667 sys 0m0.440s 00:17:31.667 ************************************ 00:17:31.667 END TEST skip_rpc 00:17:31.667 ************************************ 00:17:31.667 18:45:00 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:17:31.667 18:45:00 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.667 18:45:00 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:17:31.667 18:45:00 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:31.667 18:45:00 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:31.667 18:45:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.667 ************************************ 00:17:31.667 START TEST skip_rpc_with_json 00:17:31.667 ************************************ 00:17:31.667 18:45:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:17:31.667 18:45:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:17:31.667 18:45:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58763 00:17:31.667 18:45:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:17:31.667 18:45:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58763 00:17:31.667 18:45:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:17:31.667 18:45:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 58763 ']' 00:17:31.667 18:45:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:31.667 18:45:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:31.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:31.667 18:45:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:31.667 18:45:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:31.667 18:45:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:17:31.667 [2024-10-08 18:45:00.356595] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization...
00:17:31.667 [2024-10-08 18:45:00.356803] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58763 ] 00:17:31.924 [2024-10-08 18:45:00.549249] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.182 [2024-10-08 18:45:00.890033] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.555 18:45:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:33.555 18:45:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:17:33.555 18:45:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:17:33.555 18:45:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.555 18:45:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:17:33.555 [2024-10-08 18:45:01.924428] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:17:33.555 request: 00:17:33.555 { 00:17:33.555 "trtype": "tcp", 00:17:33.555 "method": "nvmf_get_transports", 00:17:33.555 "req_id": 1 00:17:33.555 } 00:17:33.555 Got JSON-RPC error response 00:17:33.555 response: 00:17:33.555 { 00:17:33.555 "code": -19, 00:17:33.555 "message": "No such device" 00:17:33.555 } 00:17:33.555 18:45:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:33.555 18:45:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:17:33.555 18:45:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.555 18:45:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:17:33.555 [2024-10-08 18:45:01.936592] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:33.555 18:45:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.555 18:45:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:17:33.555 18:45:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.555 18:45:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:17:33.555 18:45:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.555 18:45:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:17:33.555 { 00:17:33.555 "subsystems": [ 00:17:33.555 { 00:17:33.555 "subsystem": "fsdev", 00:17:33.555 "config": [ 00:17:33.555 { 00:17:33.555 "method": "fsdev_set_opts", 00:17:33.555 "params": { 00:17:33.555 "fsdev_io_pool_size": 65535, 00:17:33.555 "fsdev_io_cache_size": 256 00:17:33.555 } 00:17:33.555 } 00:17:33.555 ] 00:17:33.555 }, 00:17:33.555 { 00:17:33.555 "subsystem": "keyring", 00:17:33.555 "config": [] 00:17:33.555 }, 00:17:33.555 { 00:17:33.555 "subsystem": "iobuf", 00:17:33.555 "config": [ 00:17:33.555 { 00:17:33.555 "method": "iobuf_set_options", 00:17:33.555 "params": { 00:17:33.555 "small_pool_count": 8192, 00:17:33.555 "large_pool_count": 1024, 00:17:33.555 "small_bufsize": 8192, 00:17:33.555 "large_bufsize": 135168 00:17:33.555 } 00:17:33.555 } 00:17:33.555 ] 00:17:33.555 }, 00:17:33.555 { 00:17:33.555 "subsystem": "sock", 00:17:33.555 "config": [ 00:17:33.555 {
00:17:33.555 "method": "sock_set_default_impl", 00:17:33.555 "params": { 00:17:33.555 "impl_name": "posix" 00:17:33.555 } 00:17:33.555 }, 00:17:33.555 { 00:17:33.555 "method": "sock_impl_set_options", 00:17:33.555 "params": { 00:17:33.555 "impl_name": "ssl", 00:17:33.555 "recv_buf_size": 4096, 00:17:33.555 "send_buf_size": 4096, 00:17:33.555 "enable_recv_pipe": true, 00:17:33.555 "enable_quickack": false, 00:17:33.555 "enable_placement_id": 0, 00:17:33.555 "enable_zerocopy_send_server": true, 00:17:33.555 "enable_zerocopy_send_client": false, 00:17:33.555 "zerocopy_threshold": 0, 00:17:33.555 "tls_version": 0, 00:17:33.555 "enable_ktls": false 00:17:33.555 } 00:17:33.555 }, 00:17:33.555 { 00:17:33.555 "method": "sock_impl_set_options", 00:17:33.555 "params": { 00:17:33.555 "impl_name": "posix", 00:17:33.555 "recv_buf_size": 2097152, 00:17:33.555 "send_buf_size": 2097152, 00:17:33.555 "enable_recv_pipe": true, 00:17:33.555 "enable_quickack": false, 00:17:33.555 "enable_placement_id": 0, 00:17:33.555 "enable_zerocopy_send_server": true, 00:17:33.555 "enable_zerocopy_send_client": false, 00:17:33.555 "zerocopy_threshold": 0, 00:17:33.555 "tls_version": 0, 00:17:33.555 "enable_ktls": false 00:17:33.555 } 00:17:33.555 } 00:17:33.555 ] 00:17:33.555 }, 00:17:33.555 { 00:17:33.555 "subsystem": "vmd", 00:17:33.555 "config": [] 00:17:33.555 }, 00:17:33.555 { 00:17:33.555 "subsystem": "accel", 00:17:33.555 "config": [ 00:17:33.555 { 00:17:33.555 "method": "accel_set_options", 00:17:33.555 "params": { 00:17:33.555 "small_cache_size": 128, 00:17:33.555 "large_cache_size": 16, 00:17:33.555 "task_count": 2048, 00:17:33.555 "sequence_count": 2048, 00:17:33.555 "buf_count": 2048 00:17:33.556 } 00:17:33.556 } 00:17:33.556 ] 00:17:33.556 }, 00:17:33.556 { 00:17:33.556 "subsystem": "bdev", 00:17:33.556 "config": [ 00:17:33.556 { 00:17:33.556 "method": "bdev_set_options", 00:17:33.556 "params": { 00:17:33.556 "bdev_io_pool_size": 65535, 00:17:33.556 "bdev_io_cache_size": 256, 00:17:33.556 "bdev_auto_examine": true, 00:17:33.556 "iobuf_small_cache_size": 128, 00:17:33.556 "iobuf_large_cache_size": 16 00:17:33.556 } 00:17:33.556 }, 00:17:33.556 { 00:17:33.556 "method": "bdev_raid_set_options", 00:17:33.556 "params": { 00:17:33.556 "process_window_size_kb": 1024, 00:17:33.556 "process_max_bandwidth_mb_sec": 0 00:17:33.556 } 00:17:33.556 }, 00:17:33.556 { 00:17:33.556 "method": "bdev_iscsi_set_options", 00:17:33.556 "params": { 00:17:33.556 "timeout_sec": 30 00:17:33.556 } 00:17:33.556 }, 00:17:33.556 { 00:17:33.556 "method": "bdev_nvme_set_options", 00:17:33.556 "params": { 00:17:33.556 "action_on_timeout": "none", 00:17:33.556 "timeout_us": 0, 00:17:33.556 "timeout_admin_us": 0, 00:17:33.556 "keep_alive_timeout_ms": 10000, 00:17:33.556 "arbitration_burst": 0, 00:17:33.556 "low_priority_weight": 0, 00:17:33.556 "medium_priority_weight": 0, 00:17:33.556 "high_priority_weight": 0, 00:17:33.556 "nvme_adminq_poll_period_us": 10000, 00:17:33.556 "nvme_ioq_poll_period_us": 0, 00:17:33.556 "io_queue_requests": 0, 00:17:33.556 "delay_cmd_submit": true, 00:17:33.556 "transport_retry_count": 4, 00:17:33.556 "bdev_retry_count": 3, 00:17:33.556 "transport_ack_timeout": 0, 00:17:33.556 "ctrlr_loss_timeout_sec": 0, 00:17:33.556 "reconnect_delay_sec": 0, 00:17:33.556 "fast_io_fail_timeout_sec": 0, 00:17:33.556 "disable_auto_failback": false, 00:17:33.556 "generate_uuids": false, 00:17:33.556 "transport_tos": 0, 00:17:33.556 "nvme_error_stat": false, 00:17:33.556 "rdma_srq_size": 0, 00:17:33.556 "io_path_stat": false,
00:17:33.556 "allow_accel_sequence": false, 00:17:33.556 "rdma_max_cq_size": 0, 00:17:33.556 "rdma_cm_event_timeout_ms": 0, 00:17:33.556 "dhchap_digests": [ 00:17:33.556 "sha256", 00:17:33.556 "sha384", 00:17:33.556 "sha512" 00:17:33.556 ], 00:17:33.556 "dhchap_dhgroups": [ 00:17:33.556 "null", 00:17:33.556 "ffdhe2048", 00:17:33.556 "ffdhe3072", 00:17:33.556 "ffdhe4096", 00:17:33.556 "ffdhe6144", 00:17:33.556 "ffdhe8192" 00:17:33.556 ] 00:17:33.556 } 00:17:33.556 }, 00:17:33.556 { 00:17:33.556 "method": "bdev_nvme_set_hotplug", 00:17:33.556 "params": { 00:17:33.556 "period_us": 100000, 00:17:33.556 "enable": false 00:17:33.556 } 00:17:33.556 }, 00:17:33.556 { 00:17:33.556 "method": "bdev_wait_for_examine" 00:17:33.556 } 00:17:33.556 ] 00:17:33.556 }, 00:17:33.556 { 00:17:33.556 "subsystem": "scsi", 00:17:33.556 "config": null 00:17:33.556 }, 00:17:33.556 { 00:17:33.556 "subsystem": "scheduler", 00:17:33.556 "config": [ 00:17:33.556 { 00:17:33.556 "method": "framework_set_scheduler", 00:17:33.556 "params": { 00:17:33.556 "name": "static" 00:17:33.556 } 00:17:33.556 } 00:17:33.556 ] 00:17:33.556 }, 00:17:33.556 { 00:17:33.556 "subsystem": "vhost_scsi", 00:17:33.556 "config": [] 00:17:33.556 }, 00:17:33.556 { 00:17:33.556 "subsystem": "vhost_blk", 00:17:33.556 "config": [] 00:17:33.556 }, 00:17:33.556 { 00:17:33.556 "subsystem": "ublk", 00:17:33.556 "config": [] 00:17:33.556 }, 00:17:33.556 { 00:17:33.556 "subsystem": "nbd", 00:17:33.556 "config": [] 00:17:33.556 }, 00:17:33.556 { 00:17:33.556 "subsystem": "nvmf", 00:17:33.556 "config": [ 00:17:33.556 { 00:17:33.556 "method": "nvmf_set_config", 00:17:33.556 "params": { 00:17:33.556 "discovery_filter": "match_any", 00:17:33.556 "admin_cmd_passthru": { 00:17:33.556 "identify_ctrlr": false 00:17:33.556 }, 00:17:33.556 "dhchap_digests": [ 00:17:33.556 "sha256", 00:17:33.556 "sha384", 00:17:33.556 "sha512" 00:17:33.556 ], 00:17:33.556 "dhchap_dhgroups": [ 00:17:33.556 "null", 00:17:33.556 "ffdhe2048", 00:17:33.556 "ffdhe3072", 00:17:33.556 "ffdhe4096", 00:17:33.556 "ffdhe6144", 00:17:33.556 "ffdhe8192" 00:17:33.556 ] 00:17:33.556 } 00:17:33.556 }, 00:17:33.556 { 00:17:33.556 "method": "nvmf_set_max_subsystems", 00:17:33.556 "params": { 00:17:33.556 "max_subsystems": 1024 00:17:33.556 } 00:17:33.556 }, 00:17:33.556 { 00:17:33.556 "method": "nvmf_set_crdt", 00:17:33.556 "params": { 00:17:33.556 "crdt1": 0, 00:17:33.556 "crdt2": 0, 00:17:33.556 "crdt3": 0 00:17:33.556 } 00:17:33.556 }, 00:17:33.556 { 00:17:33.556 "method": "nvmf_create_transport", 00:17:33.556 "params": { 00:17:33.556 "trtype": "TCP", 00:17:33.556 "max_queue_depth": 128, 00:17:33.556 "max_io_qpairs_per_ctrlr": 127, 00:17:33.556 "in_capsule_data_size": 4096, 00:17:33.556 "max_io_size": 131072, 00:17:33.556 "io_unit_size": 131072, 00:17:33.556 "max_aq_depth": 128, 00:17:33.556 "num_shared_buffers": 511, 00:17:33.556 "buf_cache_size": 4294967295, 00:17:33.556 "dif_insert_or_strip": false, 00:17:33.556 "zcopy": false, 00:17:33.556 "c2h_success": true, 00:17:33.556 "sock_priority": 0, 00:17:33.556 "abort_timeout_sec": 1, 00:17:33.556 "ack_timeout": 0, 00:17:33.556 "data_wr_pool_size": 0 00:17:33.556 } 00:17:33.556 } 00:17:33.556 ] 00:17:33.556 }, 00:17:33.556 { 00:17:33.556 "subsystem": "iscsi", 00:17:33.556 "config": [ 00:17:33.556 { 00:17:33.556 "method": "iscsi_set_options", 00:17:33.556 "params": { 00:17:33.556 "node_base": "iqn.2016-06.io.spdk", 00:17:33.556 "max_sessions": 128, 00:17:33.556 "max_connections_per_session": 2, 00:17:33.556 "max_queue_depth": 64, 00:17:33.556 "default_time2wait": 2,
00:17:33.556 "default_time2retain": 20, 00:17:33.556 "first_burst_length": 8192, 00:17:33.556 "immediate_data": true, 00:17:33.556 "allow_duplicated_isid": false, 00:17:33.556 "error_recovery_level": 0, 00:17:33.556 "nop_timeout": 60, 00:17:33.556 "nop_in_interval": 30, 00:17:33.556 "disable_chap": false, 00:17:33.556 "require_chap": false, 00:17:33.556 "mutual_chap": false, 00:17:33.556 "chap_group": 0, 00:17:33.556 "max_large_datain_per_connection": 64, 00:17:33.556 "max_r2t_per_connection": 4, 00:17:33.556 "pdu_pool_size": 36864, 00:17:33.556 "immediate_data_pool_size": 16384, 00:17:33.556 "data_out_pool_size": 2048 00:17:33.556 } 00:17:33.556 } 00:17:33.556 ] 00:17:33.556 } 00:17:33.556 ] 00:17:33.556 } 00:17:33.556 18:45:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:17:33.556 18:45:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58763 00:17:33.556 18:45:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 58763 ']' 00:17:33.556 18:45:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 58763 00:17:33.556 18:45:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:17:33.556 18:45:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:33.556 18:45:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58763 00:17:33.556 18:45:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:33.556 killing process with pid 58763 00:17:33.556 18:45:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:33.556 18:45:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58763' 00:17:33.556 18:45:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 58763 00:17:33.556 18:45:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 58763 00:17:36.838 18:45:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58830 00:17:36.838 18:45:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:17:36.838 18:45:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:17:42.134 18:45:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58830 00:17:42.134 18:45:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 58830 ']' 00:17:42.134 18:45:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 58830 00:17:42.134 18:45:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:17:42.134 18:45:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:42.134 18:45:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58830 00:17:42.134 18:45:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:42.134 killing process with pid 58830 00:17:42.134 18:45:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:42.134 18:45:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58830' 00:17:42.134 18:45:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 58830 
00:17:42.134 18:45:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 58830 00:17:44.698 18:45:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:17:44.698 18:45:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:17:44.698 00:17:44.698 real 0m13.118s 00:17:44.698 user 0m12.570s 00:17:44.698 sys 0m1.032s 00:17:44.698 18:45:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:44.698 18:45:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:17:44.698 ************************************ 00:17:44.698 END TEST skip_rpc_with_json 00:17:44.698 ************************************ 00:17:44.698 18:45:13 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:17:44.699 18:45:13 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:44.699 18:45:13 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:44.699 18:45:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:44.699 ************************************ 00:17:44.699 START TEST skip_rpc_with_delay 00:17:44.699 ************************************ 00:17:44.699 18:45:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:17:44.699 18:45:13 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:17:44.699 18:45:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:17:44.699 18:45:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:17:44.699 18:45:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:44.699 18:45:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:44.699 18:45:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:44.699 18:45:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:44.699 18:45:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:44.699 18:45:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:44.699 18:45:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:44.699 18:45:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:17:44.699 18:45:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:17:44.957 [2024-10-08 18:45:13.517599] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:17:44.957 [2024-10-08 18:45:13.517823] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:17:44.957 18:45:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:17:44.957 18:45:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:44.957 18:45:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:44.957 18:45:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:44.957 00:17:44.957 real 0m0.218s 00:17:44.958 user 0m0.120s 00:17:44.958 sys 0m0.096s 00:17:44.958 18:45:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:44.958 18:45:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:17:44.958 ************************************ 00:17:44.958 END TEST skip_rpc_with_delay 00:17:44.958 ************************************ 00:17:44.958 18:45:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:17:44.958 18:45:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:17:44.958 18:45:13 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:17:44.958 18:45:13 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:44.958 18:45:13 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:44.958 18:45:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:44.958 ************************************ 00:17:44.958 START TEST exit_on_failed_rpc_init 00:17:44.958 ************************************ 00:17:44.958 18:45:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:17:44.958 18:45:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58969 00:17:44.958 18:45:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:17:44.958 18:45:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58969 00:17:44.958 18:45:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 58969 ']' 00:17:44.958 18:45:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:44.958 18:45:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:44.958 18:45:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:44.958 18:45:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:44.958 18:45:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:17:45.217 [2024-10-08 18:45:13.806485] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:17:45.217 [2024-10-08 18:45:13.806690] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58969 ] 00:17:45.485 [2024-10-08 18:45:13.979649] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.742 [2024-10-08 18:45:14.255350] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.678 18:45:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:46.678 18:45:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:17:46.679 18:45:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:17:46.679 18:45:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:17:46.679 18:45:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:17:46.679 18:45:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:17:46.679 18:45:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:46.679 18:45:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:46.679 18:45:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:46.679 18:45:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:46.679 18:45:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:46.679 18:45:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:46.679 18:45:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:46.679 18:45:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:17:46.679 18:45:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:17:46.937 [2024-10-08 18:45:15.454493] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:17:46.937 [2024-10-08 18:45:15.454657] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58987 ] 00:17:46.937 [2024-10-08 18:45:15.643782] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.504 [2024-10-08 18:45:16.090526] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:17:47.504 [2024-10-08 18:45:16.090683] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:17:47.504 [2024-10-08 18:45:16.090709] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:17:47.504 [2024-10-08 18:45:16.090732] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:48.072 18:45:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:17:48.072 18:45:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:48.072 18:45:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:17:48.072 18:45:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:17:48.072 18:45:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:17:48.072 18:45:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:48.072 18:45:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:17:48.072 18:45:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58969 00:17:48.072 18:45:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 58969 ']' 00:17:48.072 18:45:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 58969 00:17:48.072 18:45:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:17:48.072 18:45:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:48.072 18:45:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58969 00:17:48.072 18:45:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:48.072 18:45:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:48.072 killing process with pid 58969 00:17:48.072 18:45:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58969' 00:17:48.072 18:45:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 58969 00:17:48.072 18:45:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 58969 00:17:51.353 00:17:51.353 real 0m6.086s 00:17:51.353 user 0m6.954s 00:17:51.353 sys 0m0.757s 00:17:51.353 18:45:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:51.353 18:45:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:17:51.353 ************************************ 00:17:51.353 END TEST exit_on_failed_rpc_init 00:17:51.353 ************************************ 00:17:51.353 18:45:19 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:17:51.353 00:17:51.353 real 0m27.914s 00:17:51.353 user 0m27.348s 00:17:51.353 sys 0m2.559s 00:17:51.353 18:45:19 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:51.353 18:45:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:51.353 ************************************ 00:17:51.353 END TEST skip_rpc 00:17:51.354 ************************************ 00:17:51.354 18:45:19 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:17:51.354 18:45:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:51.354 18:45:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:51.354 18:45:19 -- common/autotest_common.sh@10 -- # set +x
00:17:51.354 ************************************ 00:17:51.354 START TEST rpc_client 00:17:51.354 ************************************ 00:17:51.354 18:45:19 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:17:51.354 * Looking for test storage... 00:17:51.354 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:17:51.354 18:45:19 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:51.354 18:45:19 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:17:51.354 18:45:19 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:51.354 18:45:20 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:51.354 18:45:20 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:51.354 18:45:20 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:51.354 18:45:20 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:51.354 18:45:20 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:17:51.354 18:45:20 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:17:51.354 18:45:20 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:17:51.354 18:45:20 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:17:51.354 18:45:20 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:17:51.354 18:45:20 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:17:51.354 18:45:20 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:17:51.354 18:45:20 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:51.354 18:45:20 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:17:51.354 18:45:20 rpc_client -- scripts/common.sh@345 -- # : 1 00:17:51.354 18:45:20 rpc_client -- scripts/common.sh@364 -- # (( v = 0 ))
00:17:51.354 18:45:20 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:51.354 18:45:20 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:17:51.354 18:45:20 rpc_client -- scripts/common.sh@353 -- # local d=1 00:17:51.354 18:45:20 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:51.354 18:45:20 rpc_client -- scripts/common.sh@355 -- # echo 1 00:17:51.354 18:45:20 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:17:51.354 18:45:20 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:17:51.354 18:45:20 rpc_client -- scripts/common.sh@353 -- # local d=2 00:17:51.354 18:45:20 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:51.354 18:45:20 rpc_client -- scripts/common.sh@355 -- # echo 2 00:17:51.354 18:45:20 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:17:51.354 18:45:20 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:51.354 18:45:20 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:51.354 18:45:20 rpc_client -- scripts/common.sh@368 -- # return 0 00:17:51.354 18:45:20 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:51.354 18:45:20 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:51.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.354 --rc genhtml_branch_coverage=1 00:17:51.354 --rc genhtml_function_coverage=1 00:17:51.354 --rc genhtml_legend=1 00:17:51.354 --rc geninfo_all_blocks=1 00:17:51.354 --rc geninfo_unexecuted_blocks=1 00:17:51.354 00:17:51.354 ' 00:17:51.354 18:45:20 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:51.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.354 --rc genhtml_branch_coverage=1 00:17:51.354 --rc genhtml_function_coverage=1 00:17:51.354 --rc genhtml_legend=1 00:17:51.354 --rc geninfo_all_blocks=1 00:17:51.354 --rc geninfo_unexecuted_blocks=1 00:17:51.354 00:17:51.354 ' 00:17:51.354 18:45:20 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:51.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.354 --rc genhtml_branch_coverage=1 00:17:51.354 --rc genhtml_function_coverage=1 00:17:51.354 --rc genhtml_legend=1 00:17:51.354 --rc geninfo_all_blocks=1 00:17:51.354 --rc geninfo_unexecuted_blocks=1 00:17:51.354 00:17:51.354 ' 00:17:51.354 18:45:20 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:51.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.354 --rc genhtml_branch_coverage=1 00:17:51.354 --rc genhtml_function_coverage=1 00:17:51.354 --rc genhtml_legend=1 00:17:51.354 --rc geninfo_all_blocks=1 00:17:51.354 --rc geninfo_unexecuted_blocks=1 00:17:51.354 00:17:51.354 ' 00:17:51.354 18:45:20 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:17:51.354 OK 00:17:51.611 18:45:20 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:17:51.611 00:17:51.611 real 0m0.297s 00:17:51.611 user 0m0.179s 00:17:51.611 sys 0m0.129s 00:17:51.611 18:45:20 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:51.611 ************************************ 00:17:51.611 END TEST rpc_client 00:17:51.611 ************************************ 00:17:51.611 18:45:20 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:17:51.611 18:45:20 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:17:51.611 18:45:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:51.611 18:45:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:51.611 18:45:20 -- common/autotest_common.sh@10 -- # set +x 00:17:51.611 ************************************ 00:17:51.611 START TEST json_config 00:17:51.611 ************************************ 00:17:51.611 18:45:20 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:17:51.611 18:45:20 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:51.611 18:45:20 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:51.611 18:45:20 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:17:51.869 18:45:20 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:51.869 18:45:20 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:51.869 18:45:20 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:51.869 18:45:20 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:51.869 18:45:20 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:17:51.869 18:45:20 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:17:51.869 18:45:20 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:17:51.869 18:45:20 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:17:51.869 18:45:20 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:17:51.869 18:45:20 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:17:51.869 18:45:20 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:17:51.869 18:45:20 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:51.869 18:45:20 json_config -- scripts/common.sh@344 -- # case "$op" in 00:17:51.869 18:45:20 json_config -- scripts/common.sh@345 -- # : 1 00:17:51.869 18:45:20 json_config -- scripts/common.sh@364 -- # (( v = 0 ))
00:17:51.869 18:45:20 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:51.869 18:45:20 json_config -- scripts/common.sh@365 -- # decimal 1 00:17:51.869 18:45:20 json_config -- scripts/common.sh@353 -- # local d=1 00:17:51.869 18:45:20 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:51.869 18:45:20 json_config -- scripts/common.sh@355 -- # echo 1 00:17:51.869 18:45:20 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:17:51.869 18:45:20 json_config -- scripts/common.sh@366 -- # decimal 2 00:17:51.869 18:45:20 json_config -- scripts/common.sh@353 -- # local d=2 00:17:51.869 18:45:20 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:51.869 18:45:20 json_config -- scripts/common.sh@355 -- # echo 2 00:17:51.869 18:45:20 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:17:51.869 18:45:20 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:51.869 18:45:20 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:51.869 18:45:20 json_config -- scripts/common.sh@368 -- # return 0 00:17:51.869 18:45:20 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:51.869 18:45:20 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:51.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.869 --rc genhtml_branch_coverage=1 00:17:51.869 --rc genhtml_function_coverage=1 00:17:51.869 --rc genhtml_legend=1 00:17:51.869 --rc geninfo_all_blocks=1 00:17:51.869 --rc geninfo_unexecuted_blocks=1 00:17:51.869 00:17:51.869 ' 00:17:51.869 18:45:20 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:51.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.869 --rc genhtml_branch_coverage=1 00:17:51.869 --rc genhtml_function_coverage=1 00:17:51.869 --rc genhtml_legend=1 00:17:51.869 --rc geninfo_all_blocks=1 00:17:51.869 --rc geninfo_unexecuted_blocks=1 00:17:51.869 00:17:51.869 ' 00:17:51.869 18:45:20 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:51.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.869 --rc genhtml_branch_coverage=1 00:17:51.869 --rc genhtml_function_coverage=1 00:17:51.869 --rc genhtml_legend=1 00:17:51.869 --rc geninfo_all_blocks=1 00:17:51.869 --rc geninfo_unexecuted_blocks=1 00:17:51.869 00:17:51.869 ' 00:17:51.869 18:45:20 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:51.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.869 --rc genhtml_branch_coverage=1 00:17:51.869 --rc genhtml_function_coverage=1 00:17:51.869 --rc genhtml_legend=1 00:17:51.869 --rc geninfo_all_blocks=1 00:17:51.869 --rc geninfo_unexecuted_blocks=1 00:17:51.869 00:17:51.869 ' 00:17:51.870 18:45:20 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:51.870 18:45:20 json_config -- nvmf/common.sh@7 -- # uname -s 00:17:51.870 18:45:20 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:51.870 18:45:20 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:51.870 18:45:20 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:51.870 18:45:20 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:51.870 18:45:20 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:51.870 18:45:20 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:51.870 18:45:20 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:17:51.870 18:45:20 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:51.870 18:45:20 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:51.870 18:45:20 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:51.870 18:45:20 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2b30ccf6-f6d8-4ff4-85d2-d61da9ea3b67 00:17:51.870 18:45:20 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=2b30ccf6-f6d8-4ff4-85d2-d61da9ea3b67 00:17:51.870 18:45:20 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:51.870 18:45:20 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:51.870 18:45:20 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:17:51.870 18:45:20 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:51.870 18:45:20 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:51.870 18:45:20 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:17:51.870 18:45:20 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:51.870 18:45:20 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:51.870 18:45:20 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:51.870 18:45:20 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.870 18:45:20 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.870 18:45:20 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.870 18:45:20 json_config -- paths/export.sh@5 -- # export PATH 00:17:51.870 18:45:20 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.870 18:45:20 json_config -- nvmf/common.sh@51 -- # : 0 00:17:51.870 18:45:20 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:51.870 18:45:20 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:17:51.870 18:45:20 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:51.870 18:45:20 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:51.870 18:45:20 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:51.870 18:45:20 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:51.870 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:51.870 18:45:20 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:51.870 18:45:20 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:51.870 18:45:20 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:51.870 18:45:20 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:17:51.870 18:45:20 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:17:51.870 18:45:20 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:17:51.870 18:45:20 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:17:51.870 18:45:20 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:17:51.870 WARNING: No tests are enabled so not running JSON configuration tests 00:17:51.870 18:45:20 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:17:51.870 18:45:20 json_config -- json_config/json_config.sh@28 -- # exit 0 00:17:51.870 00:17:51.870 real 0m0.231s 00:17:51.870 user 0m0.162s 00:17:51.870 sys 0m0.076s 00:17:51.870 18:45:20 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:51.870 18:45:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:17:51.870 ************************************ 00:17:51.870 END TEST json_config 00:17:51.870 ************************************ 00:17:51.870 18:45:20 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:17:51.870 18:45:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:51.870 18:45:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:51.870 18:45:20 -- common/autotest_common.sh@10 -- # set +x 00:17:51.870 ************************************ 00:17:51.870 START TEST json_config_extra_key 00:17:51.870 ************************************ 00:17:51.870 18:45:20 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:17:51.870 18:45:20 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:51.870 18:45:20 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:17:51.870 18:45:20 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:52.130 18:45:20 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:52.130 18:45:20 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:52.130 18:45:20 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:52.130 18:45:20 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:52.130 18:45:20 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:17:52.130 18:45:20 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:17:52.130 18:45:20 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-:
00:17:52.130 18:45:20 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:17:52.130 18:45:20 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:17:52.130 18:45:20 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:17:52.130 18:45:20 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:17:52.130 18:45:20 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:52.130 18:45:20 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:17:52.130 18:45:20 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:17:52.130 18:45:20 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:52.130 18:45:20 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:52.130 18:45:20 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:17:52.130 18:45:20 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:17:52.130 18:45:20 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:52.130 18:45:20 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:17:52.130 18:45:20 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:17:52.130 18:45:20 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:17:52.130 18:45:20 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:17:52.130 18:45:20 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:52.130 18:45:20 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:17:52.130 18:45:20 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:17:52.130 18:45:20 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:52.130 18:45:20 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:52.130 18:45:20 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:17:52.130 18:45:20 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:52.130 18:45:20 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:52.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.130 --rc genhtml_branch_coverage=1 00:17:52.130 --rc genhtml_function_coverage=1 00:17:52.130 --rc genhtml_legend=1 00:17:52.130 --rc geninfo_all_blocks=1 00:17:52.130 --rc geninfo_unexecuted_blocks=1 00:17:52.130 00:17:52.130 ' 00:17:52.130 18:45:20 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:52.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.130 --rc genhtml_branch_coverage=1 00:17:52.130 --rc genhtml_function_coverage=1 00:17:52.130 --rc genhtml_legend=1 00:17:52.130 --rc geninfo_all_blocks=1 00:17:52.130 --rc geninfo_unexecuted_blocks=1 00:17:52.130 00:17:52.130 ' 00:17:52.130 18:45:20 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:52.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.130 --rc genhtml_branch_coverage=1 00:17:52.130 --rc
genhtml_function_coverage=1 00:17:52.130 --rc genhtml_legend=1 00:17:52.130 --rc geninfo_all_blocks=1 00:17:52.130 --rc geninfo_unexecuted_blocks=1 00:17:52.130 00:17:52.130 ' 00:17:52.130 18:45:20 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:52.130 18:45:20 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:17:52.130 18:45:20 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:52.130 18:45:20 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:52.130 18:45:20 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:52.130 18:45:20 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:52.130 18:45:20 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:52.130 18:45:20 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:52.130 18:45:20 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:52.130 18:45:20 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:52.130 18:45:20 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:52.130 18:45:20 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:52.130 18:45:20 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2b30ccf6-f6d8-4ff4-85d2-d61da9ea3b67 00:17:52.130 18:45:20 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=2b30ccf6-f6d8-4ff4-85d2-d61da9ea3b67 00:17:52.130 18:45:20 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:52.130 18:45:20 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:52.130 18:45:20 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:17:52.130 18:45:20 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:52.130 18:45:20 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:52.130 18:45:20 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:17:52.130 18:45:20 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:52.130 18:45:20 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:52.130 18:45:20 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:52.130 18:45:20 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.131 18:45:20 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.131 18:45:20 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.131 18:45:20 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:17:52.131 18:45:20 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.131 18:45:20 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:17:52.131 18:45:20 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:52.131 18:45:20 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:52.131 18:45:20 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:52.131 18:45:20 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:52.131 18:45:20 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:52.131 18:45:20 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:52.131 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:52.131 18:45:20 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:52.131 18:45:20 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:52.131 18:45:20 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:52.131 18:45:20 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:17:52.131 18:45:20 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:17:52.131 18:45:20 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:17:52.131 18:45:20 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:17:52.131 18:45:20 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:17:52.131 18:45:20 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:17:52.131 18:45:20 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:17:52.131 18:45:20 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:17:52.131 18:45:20 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:17:52.131 18:45:20 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:17:52.131 INFO: launching applications... 00:17:52.131 18:45:20 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
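The arrays declared above (app_pid, app_socket, app_params) drive json_config_test_start_app, which launches spdk_tgt with the 'target' app's parameters and blocks until its RPC socket answers. A minimal sketch of that launch-and-wait pattern, with a simplified poll loop standing in for the real waitforlisten helper:

    # Sketch only: launch spdk_tgt with the extra_key JSON config and wait
    # for its RPC socket. Paths match the trace; the poll loop is simplified.
    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    CONFIG=/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json
    SOCK=/var/tmp/spdk_tgt.sock

    "$SPDK_BIN" -m 0x1 -s 1024 -r "$SOCK" --json "$CONFIG" &
    app_pid=$!

    # Simplified waitforlisten: retry an RPC until the socket accepts it.
    for _ in $(seq 1 100); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$SOCK" rpc_get_methods \
            &>/dev/null && break
        sleep 0.1
    done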
00:17:52.131 18:45:20 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:17:52.131 18:45:20 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:17:52.131 18:45:20 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:17:52.131 18:45:20 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:17:52.131 18:45:20 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:17:52.131 18:45:20 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:17:52.131 18:45:20 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:17:52.131 18:45:20 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:17:52.131 18:45:20 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59214 00:17:52.131 18:45:20 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:17:52.131 18:45:20 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:17:52.131 Waiting for target to run... 00:17:52.131 18:45:20 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59214 /var/tmp/spdk_tgt.sock 00:17:52.131 18:45:20 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 59214 ']' 00:17:52.131 18:45:20 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:17:52.131 18:45:20 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:52.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:17:52.131 18:45:20 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:17:52.131 18:45:20 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:52.131 18:45:20 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:17:52.131 [2024-10-08 18:45:20.825057] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:17:52.131 [2024-10-08 18:45:20.825246] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59214 ] 00:17:52.697 [2024-10-08 18:45:21.282860] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.955 [2024-10-08 18:45:21.594839] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.330 18:45:22 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:54.330 00:17:54.330 18:45:22 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:17:54.330 18:45:22 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:17:54.330 INFO: shutting down applications... 00:17:54.330 18:45:22 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
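The teardown traced below is json_config_test_shutdown_app's poll loop: SIGINT the target, then probe the pid with kill -0 (which sends no signal, only tests existence) every 0.5 s for up to 30 iterations. A condensed sketch, assuming app_pid was captured at launch:

    # Sketch of the shutdown poll loop (simplified from json_config/common.sh).
    kill -SIGINT "$app_pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$app_pid" 2>/dev/null || break   # process gone? stop polling
        sleep 0.5
    done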
00:17:54.330 18:45:22 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:17:54.330 18:45:22 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:17:54.330 18:45:22 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:17:54.330 18:45:22 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59214 ]] 00:17:54.330 18:45:22 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59214 00:17:54.330 18:45:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:17:54.330 18:45:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:17:54.330 18:45:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59214 00:17:54.330 18:45:22 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:17:54.588 18:45:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:17:54.588 18:45:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:17:54.588 18:45:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59214 00:17:54.588 18:45:23 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:17:55.154 18:45:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:17:55.154 18:45:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:17:55.154 18:45:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59214 00:17:55.154 18:45:23 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:17:55.719 18:45:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:17:55.719 18:45:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:17:55.719 18:45:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59214 00:17:55.719 18:45:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:17:55.977 18:45:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:17:55.977 18:45:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:17:55.977 18:45:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59214 00:17:55.977 18:45:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:17:56.543 18:45:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:17:56.543 18:45:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:17:56.543 18:45:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59214 00:17:56.543 18:45:25 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:17:57.108 18:45:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:17:57.108 18:45:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:17:57.108 18:45:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59214 00:17:57.108 18:45:25 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:17:57.674 18:45:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:17:57.674 18:45:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:17:57.674 18:45:26 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59214 00:17:57.674 18:45:26 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:17:57.674 18:45:26 json_config_extra_key -- json_config/common.sh@43 -- # break 00:17:57.674 18:45:26 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:17:57.674 SPDK target shutdown 
done 00:17:57.674 18:45:26 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:17:57.674 Success 00:17:57.674 18:45:26 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:17:57.674 00:17:57.674 real 0m5.760s 00:17:57.674 user 0m5.481s 00:17:57.674 sys 0m0.662s 00:17:57.674 18:45:26 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:57.674 18:45:26 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:17:57.674 ************************************ 00:17:57.674 END TEST json_config_extra_key 00:17:57.674 ************************************ 00:17:57.674 18:45:26 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:17:57.674 18:45:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:57.674 18:45:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:57.674 18:45:26 -- common/autotest_common.sh@10 -- # set +x 00:17:57.674 ************************************ 00:17:57.674 START TEST alias_rpc 00:17:57.674 ************************************ 00:17:57.674 18:45:26 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:17:57.674 * Looking for test storage... 00:17:57.674 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:17:57.674 18:45:26 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:57.674 18:45:26 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:17:57.674 18:45:26 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:57.932 18:45:26 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:57.932 18:45:26 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:57.932 18:45:26 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:57.932 18:45:26 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:57.932 18:45:26 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:17:57.932 18:45:26 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:17:57.932 18:45:26 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:17:57.932 18:45:26 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:17:57.932 18:45:26 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:17:57.932 18:45:26 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:17:57.932 18:45:26 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:17:57.932 18:45:26 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:57.932 18:45:26 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:17:57.932 18:45:26 alias_rpc -- scripts/common.sh@345 -- # : 1 00:17:57.932 18:45:26 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:57.932 18:45:26 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:57.932 18:45:26 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:17:57.932 18:45:26 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:17:57.932 18:45:26 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:57.932 18:45:26 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:17:57.932 18:45:26 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:57.932 18:45:26 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:17:57.932 18:45:26 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:17:57.932 18:45:26 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:57.932 18:45:26 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:17:57.932 18:45:26 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:57.932 18:45:26 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:57.932 18:45:26 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:57.932 18:45:26 alias_rpc -- scripts/common.sh@368 -- # return 0 00:17:57.932 18:45:26 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:57.932 18:45:26 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:57.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.932 --rc genhtml_branch_coverage=1 00:17:57.932 --rc genhtml_function_coverage=1 00:17:57.932 --rc genhtml_legend=1 00:17:57.932 --rc geninfo_all_blocks=1 00:17:57.932 --rc geninfo_unexecuted_blocks=1 00:17:57.932 00:17:57.932 ' 00:17:57.932 18:45:26 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:57.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.932 --rc genhtml_branch_coverage=1 00:17:57.932 --rc genhtml_function_coverage=1 00:17:57.932 --rc genhtml_legend=1 00:17:57.932 --rc geninfo_all_blocks=1 00:17:57.932 --rc geninfo_unexecuted_blocks=1 00:17:57.932 00:17:57.932 ' 00:17:57.932 18:45:26 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:57.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.932 --rc genhtml_branch_coverage=1 00:17:57.932 --rc genhtml_function_coverage=1 00:17:57.932 --rc genhtml_legend=1 00:17:57.932 --rc geninfo_all_blocks=1 00:17:57.932 --rc geninfo_unexecuted_blocks=1 00:17:57.932 00:17:57.932 ' 00:17:57.932 18:45:26 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:57.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.932 --rc genhtml_branch_coverage=1 00:17:57.932 --rc genhtml_function_coverage=1 00:17:57.932 --rc genhtml_legend=1 00:17:57.932 --rc geninfo_all_blocks=1 00:17:57.932 --rc geninfo_unexecuted_blocks=1 00:17:57.932 00:17:57.932 ' 00:17:57.932 18:45:26 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:17:57.932 18:45:26 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59337 00:17:57.932 18:45:26 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59337 00:17:57.932 18:45:26 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:57.932 18:45:26 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 59337 ']' 00:17:57.932 18:45:26 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.932 18:45:26 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:57.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
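The lt 1.15 2 probe that opens every test here is scripts/common.sh's cmp_versions: each version string is split on IFS=.-: into an array and the fields are compared pairwise. A condensed, numeric-only sketch of that logic (the real helper also normalizes fields through its decimal function):

    # Sketch: field-by-field dotted-version compare, as in cmp_versions.
    lt_sketch() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal is not less-than
    }
    lt_sketch 1.15 2 && echo 'lcov 1.15 predates 2'   # matches the traced result 0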
00:17:57.932 18:45:26 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.932 18:45:26 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:57.932 18:45:26 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:57.932 [2024-10-08 18:45:26.609876] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:17:57.932 [2024-10-08 18:45:26.610041] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59337 ] 00:17:58.191 [2024-10-08 18:45:26.782341] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.448 [2024-10-08 18:45:27.061243] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.382 18:45:28 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:59.382 18:45:28 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:17:59.382 18:45:28 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:17:59.640 18:45:28 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59337 00:17:59.640 18:45:28 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 59337 ']' 00:17:59.640 18:45:28 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 59337 00:17:59.640 18:45:28 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:17:59.640 18:45:28 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:59.640 18:45:28 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59337 00:17:59.898 18:45:28 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:59.898 killing process with pid 59337 00:17:59.898 18:45:28 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:59.898 18:45:28 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59337' 00:17:59.898 18:45:28 alias_rpc -- common/autotest_common.sh@969 -- # kill 59337 00:17:59.898 18:45:28 alias_rpc -- common/autotest_common.sh@974 -- # wait 59337 00:18:03.182 00:18:03.182 real 0m5.071s 00:18:03.182 user 0m5.185s 00:18:03.182 sys 0m0.648s 00:18:03.182 18:45:31 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:03.182 18:45:31 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:03.182 ************************************ 00:18:03.182 END TEST alias_rpc 00:18:03.182 ************************************ 00:18:03.182 18:45:31 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:18:03.182 18:45:31 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:18:03.182 18:45:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:03.182 18:45:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:03.182 18:45:31 -- common/autotest_common.sh@10 -- # set +x 00:18:03.182 ************************************ 00:18:03.182 START TEST spdkcli_tcp 00:18:03.182 ************************************ 00:18:03.182 18:45:31 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:18:03.182 * Looking for test storage... 
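Between startup and killprocess, the alias_rpc body above reduces to one call: feed a configuration back through rpc.py load_config with the -i flag shown verbatim in the trace. A hedged sketch with an illustrative empty-subsystems payload (not the test's actual config):

    # Sketch: alias_rpc's central step; the JSON payload here is a placeholder.
    echo '{"subsystems": []}' |
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i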
00:18:03.182 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:03.182 18:45:31 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:03.182 18:45:31 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:18:03.182 18:45:31 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:03.182 18:45:31 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:03.182 18:45:31 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:03.182 18:45:31 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:03.182 18:45:31 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:03.182 18:45:31 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:18:03.182 18:45:31 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:18:03.182 18:45:31 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:18:03.182 18:45:31 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:18:03.182 18:45:31 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:18:03.182 18:45:31 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:18:03.182 18:45:31 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:18:03.182 18:45:31 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:03.182 18:45:31 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:18:03.182 18:45:31 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:18:03.182 18:45:31 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:03.182 18:45:31 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:03.182 18:45:31 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:18:03.182 18:45:31 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:18:03.182 18:45:31 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:03.182 18:45:31 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:18:03.182 18:45:31 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:18:03.182 18:45:31 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:18:03.182 18:45:31 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:18:03.182 18:45:31 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:03.182 18:45:31 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:18:03.182 18:45:31 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:18:03.182 18:45:31 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:03.182 18:45:31 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:03.182 18:45:31 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:18:03.182 18:45:31 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:03.182 18:45:31 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:03.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.182 --rc genhtml_branch_coverage=1 00:18:03.182 --rc genhtml_function_coverage=1 00:18:03.182 --rc genhtml_legend=1 00:18:03.182 --rc geninfo_all_blocks=1 00:18:03.182 --rc geninfo_unexecuted_blocks=1 00:18:03.182 00:18:03.182 ' 00:18:03.182 18:45:31 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:03.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.182 --rc genhtml_branch_coverage=1 00:18:03.182 --rc genhtml_function_coverage=1 00:18:03.182 --rc genhtml_legend=1 00:18:03.182 --rc geninfo_all_blocks=1 00:18:03.183 --rc geninfo_unexecuted_blocks=1 00:18:03.183 
00:18:03.183 ' 00:18:03.183 18:45:31 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:03.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.183 --rc genhtml_branch_coverage=1 00:18:03.183 --rc genhtml_function_coverage=1 00:18:03.183 --rc genhtml_legend=1 00:18:03.183 --rc geninfo_all_blocks=1 00:18:03.183 --rc geninfo_unexecuted_blocks=1 00:18:03.183 00:18:03.183 ' 00:18:03.183 18:45:31 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:03.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.183 --rc genhtml_branch_coverage=1 00:18:03.183 --rc genhtml_function_coverage=1 00:18:03.183 --rc genhtml_legend=1 00:18:03.183 --rc geninfo_all_blocks=1 00:18:03.183 --rc geninfo_unexecuted_blocks=1 00:18:03.183 00:18:03.183 ' 00:18:03.183 18:45:31 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:03.183 18:45:31 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:03.183 18:45:31 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:03.183 18:45:31 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:18:03.183 18:45:31 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:18:03.183 18:45:31 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:03.183 18:45:31 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:18:03.183 18:45:31 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:03.183 18:45:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:03.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:03.183 18:45:31 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59455 00:18:03.183 18:45:31 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:18:03.183 18:45:31 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59455 00:18:03.183 18:45:31 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 59455 ']' 00:18:03.183 18:45:31 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:03.183 18:45:31 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:03.183 18:45:31 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:03.183 18:45:31 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:03.183 18:45:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:03.183 [2024-10-08 18:45:31.722053] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
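spdkcli_tcp exercises RPC over TCP by bridging port 9998 to the target's UNIX socket with socat and pointing rpc.py at 127.0.0.1:9998, exactly as the trace below records. A sketch of that bridge using the same addresses and flags:

    # Sketch: expose /var/tmp/spdk.sock on TCP 9998, then query it over TCP.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # -r 100 / -t 2 are the retry and timeout flags shown in the trace.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 \
        -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid"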
00:18:03.183 [2024-10-08 18:45:31.722212] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59455 ] 00:18:03.183 [2024-10-08 18:45:31.894614] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:03.440 [2024-10-08 18:45:32.141286] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.440 [2024-10-08 18:45:32.141314] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:04.815 18:45:33 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:04.815 18:45:33 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:18:04.815 18:45:33 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59478 00:18:04.815 18:45:33 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:18:04.815 18:45:33 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:18:04.815 [ 00:18:04.815 "bdev_malloc_delete", 00:18:04.815 "bdev_malloc_create", 00:18:04.815 "bdev_null_resize", 00:18:04.815 "bdev_null_delete", 00:18:04.815 "bdev_null_create", 00:18:04.815 "bdev_nvme_cuse_unregister", 00:18:04.815 "bdev_nvme_cuse_register", 00:18:04.815 "bdev_opal_new_user", 00:18:04.815 "bdev_opal_set_lock_state", 00:18:04.815 "bdev_opal_delete", 00:18:04.815 "bdev_opal_get_info", 00:18:04.815 "bdev_opal_create", 00:18:04.815 "bdev_nvme_opal_revert", 00:18:04.815 "bdev_nvme_opal_init", 00:18:04.815 "bdev_nvme_send_cmd", 00:18:04.815 "bdev_nvme_set_keys", 00:18:04.815 "bdev_nvme_get_path_iostat", 00:18:04.815 "bdev_nvme_get_mdns_discovery_info", 00:18:04.815 "bdev_nvme_stop_mdns_discovery", 00:18:04.815 "bdev_nvme_start_mdns_discovery", 00:18:04.815 "bdev_nvme_set_multipath_policy", 00:18:04.815 "bdev_nvme_set_preferred_path", 00:18:04.815 "bdev_nvme_get_io_paths", 00:18:04.815 "bdev_nvme_remove_error_injection", 00:18:04.815 "bdev_nvme_add_error_injection", 00:18:04.815 "bdev_nvme_get_discovery_info", 00:18:04.815 "bdev_nvme_stop_discovery", 00:18:04.815 "bdev_nvme_start_discovery", 00:18:04.815 "bdev_nvme_get_controller_health_info", 00:18:04.815 "bdev_nvme_disable_controller", 00:18:04.815 "bdev_nvme_enable_controller", 00:18:04.815 "bdev_nvme_reset_controller", 00:18:04.815 "bdev_nvme_get_transport_statistics", 00:18:04.815 "bdev_nvme_apply_firmware", 00:18:04.815 "bdev_nvme_detach_controller", 00:18:04.815 "bdev_nvme_get_controllers", 00:18:04.815 "bdev_nvme_attach_controller", 00:18:04.815 "bdev_nvme_set_hotplug", 00:18:04.815 "bdev_nvme_set_options", 00:18:04.815 "bdev_passthru_delete", 00:18:04.815 "bdev_passthru_create", 00:18:04.815 "bdev_lvol_set_parent_bdev", 00:18:04.815 "bdev_lvol_set_parent", 00:18:04.815 "bdev_lvol_check_shallow_copy", 00:18:04.815 "bdev_lvol_start_shallow_copy", 00:18:04.815 "bdev_lvol_grow_lvstore", 00:18:04.815 "bdev_lvol_get_lvols", 00:18:04.815 "bdev_lvol_get_lvstores", 00:18:04.815 "bdev_lvol_delete", 00:18:04.815 "bdev_lvol_set_read_only", 00:18:04.815 "bdev_lvol_resize", 00:18:04.815 "bdev_lvol_decouple_parent", 00:18:04.815 "bdev_lvol_inflate", 00:18:04.815 "bdev_lvol_rename", 00:18:04.815 "bdev_lvol_clone_bdev", 00:18:04.815 "bdev_lvol_clone", 00:18:04.815 "bdev_lvol_snapshot", 00:18:04.815 "bdev_lvol_create", 00:18:04.815 "bdev_lvol_delete_lvstore", 00:18:04.815 "bdev_lvol_rename_lvstore", 00:18:04.815 
"bdev_lvol_create_lvstore", 00:18:04.815 "bdev_raid_set_options", 00:18:04.815 "bdev_raid_remove_base_bdev", 00:18:04.815 "bdev_raid_add_base_bdev", 00:18:04.815 "bdev_raid_delete", 00:18:04.815 "bdev_raid_create", 00:18:04.815 "bdev_raid_get_bdevs", 00:18:04.815 "bdev_error_inject_error", 00:18:04.815 "bdev_error_delete", 00:18:04.815 "bdev_error_create", 00:18:04.815 "bdev_split_delete", 00:18:04.815 "bdev_split_create", 00:18:04.815 "bdev_delay_delete", 00:18:04.815 "bdev_delay_create", 00:18:04.815 "bdev_delay_update_latency", 00:18:04.815 "bdev_zone_block_delete", 00:18:04.815 "bdev_zone_block_create", 00:18:04.815 "blobfs_create", 00:18:04.815 "blobfs_detect", 00:18:04.815 "blobfs_set_cache_size", 00:18:04.815 "bdev_xnvme_delete", 00:18:04.815 "bdev_xnvme_create", 00:18:04.815 "bdev_aio_delete", 00:18:04.815 "bdev_aio_rescan", 00:18:04.815 "bdev_aio_create", 00:18:04.815 "bdev_ftl_set_property", 00:18:04.815 "bdev_ftl_get_properties", 00:18:04.815 "bdev_ftl_get_stats", 00:18:04.815 "bdev_ftl_unmap", 00:18:04.815 "bdev_ftl_unload", 00:18:04.815 "bdev_ftl_delete", 00:18:04.815 "bdev_ftl_load", 00:18:04.815 "bdev_ftl_create", 00:18:04.815 "bdev_virtio_attach_controller", 00:18:04.815 "bdev_virtio_scsi_get_devices", 00:18:04.815 "bdev_virtio_detach_controller", 00:18:04.815 "bdev_virtio_blk_set_hotplug", 00:18:04.815 "bdev_iscsi_delete", 00:18:04.815 "bdev_iscsi_create", 00:18:04.815 "bdev_iscsi_set_options", 00:18:04.815 "accel_error_inject_error", 00:18:04.815 "ioat_scan_accel_module", 00:18:04.815 "dsa_scan_accel_module", 00:18:04.815 "iaa_scan_accel_module", 00:18:04.815 "keyring_file_remove_key", 00:18:04.815 "keyring_file_add_key", 00:18:04.815 "keyring_linux_set_options", 00:18:04.815 "fsdev_aio_delete", 00:18:04.815 "fsdev_aio_create", 00:18:04.815 "iscsi_get_histogram", 00:18:04.815 "iscsi_enable_histogram", 00:18:04.815 "iscsi_set_options", 00:18:04.816 "iscsi_get_auth_groups", 00:18:04.816 "iscsi_auth_group_remove_secret", 00:18:04.816 "iscsi_auth_group_add_secret", 00:18:04.816 "iscsi_delete_auth_group", 00:18:04.816 "iscsi_create_auth_group", 00:18:04.816 "iscsi_set_discovery_auth", 00:18:04.816 "iscsi_get_options", 00:18:04.816 "iscsi_target_node_request_logout", 00:18:04.816 "iscsi_target_node_set_redirect", 00:18:04.816 "iscsi_target_node_set_auth", 00:18:04.816 "iscsi_target_node_add_lun", 00:18:04.816 "iscsi_get_stats", 00:18:04.816 "iscsi_get_connections", 00:18:04.816 "iscsi_portal_group_set_auth", 00:18:04.816 "iscsi_start_portal_group", 00:18:04.816 "iscsi_delete_portal_group", 00:18:04.816 "iscsi_create_portal_group", 00:18:04.816 "iscsi_get_portal_groups", 00:18:04.816 "iscsi_delete_target_node", 00:18:04.816 "iscsi_target_node_remove_pg_ig_maps", 00:18:04.816 "iscsi_target_node_add_pg_ig_maps", 00:18:04.816 "iscsi_create_target_node", 00:18:04.816 "iscsi_get_target_nodes", 00:18:04.816 "iscsi_delete_initiator_group", 00:18:04.816 "iscsi_initiator_group_remove_initiators", 00:18:04.816 "iscsi_initiator_group_add_initiators", 00:18:04.816 "iscsi_create_initiator_group", 00:18:04.816 "iscsi_get_initiator_groups", 00:18:04.816 "nvmf_set_crdt", 00:18:04.816 "nvmf_set_config", 00:18:04.816 "nvmf_set_max_subsystems", 00:18:04.816 "nvmf_stop_mdns_prr", 00:18:04.816 "nvmf_publish_mdns_prr", 00:18:04.816 "nvmf_subsystem_get_listeners", 00:18:04.816 "nvmf_subsystem_get_qpairs", 00:18:04.816 "nvmf_subsystem_get_controllers", 00:18:04.816 "nvmf_get_stats", 00:18:04.816 "nvmf_get_transports", 00:18:04.816 "nvmf_create_transport", 00:18:04.816 "nvmf_get_targets", 00:18:04.816 
"nvmf_delete_target", 00:18:04.816 "nvmf_create_target", 00:18:04.816 "nvmf_subsystem_allow_any_host", 00:18:04.816 "nvmf_subsystem_set_keys", 00:18:04.816 "nvmf_subsystem_remove_host", 00:18:04.816 "nvmf_subsystem_add_host", 00:18:04.816 "nvmf_ns_remove_host", 00:18:04.816 "nvmf_ns_add_host", 00:18:04.816 "nvmf_subsystem_remove_ns", 00:18:04.816 "nvmf_subsystem_set_ns_ana_group", 00:18:04.816 "nvmf_subsystem_add_ns", 00:18:04.816 "nvmf_subsystem_listener_set_ana_state", 00:18:04.816 "nvmf_discovery_get_referrals", 00:18:04.816 "nvmf_discovery_remove_referral", 00:18:04.816 "nvmf_discovery_add_referral", 00:18:04.816 "nvmf_subsystem_remove_listener", 00:18:04.816 "nvmf_subsystem_add_listener", 00:18:04.816 "nvmf_delete_subsystem", 00:18:04.816 "nvmf_create_subsystem", 00:18:04.816 "nvmf_get_subsystems", 00:18:04.816 "env_dpdk_get_mem_stats", 00:18:04.816 "nbd_get_disks", 00:18:04.816 "nbd_stop_disk", 00:18:04.816 "nbd_start_disk", 00:18:04.816 "ublk_recover_disk", 00:18:04.816 "ublk_get_disks", 00:18:04.816 "ublk_stop_disk", 00:18:04.816 "ublk_start_disk", 00:18:04.816 "ublk_destroy_target", 00:18:04.816 "ublk_create_target", 00:18:04.816 "virtio_blk_create_transport", 00:18:04.816 "virtio_blk_get_transports", 00:18:04.816 "vhost_controller_set_coalescing", 00:18:04.816 "vhost_get_controllers", 00:18:04.816 "vhost_delete_controller", 00:18:04.816 "vhost_create_blk_controller", 00:18:04.816 "vhost_scsi_controller_remove_target", 00:18:04.816 "vhost_scsi_controller_add_target", 00:18:04.816 "vhost_start_scsi_controller", 00:18:04.816 "vhost_create_scsi_controller", 00:18:04.816 "thread_set_cpumask", 00:18:04.816 "scheduler_set_options", 00:18:04.816 "framework_get_governor", 00:18:04.816 "framework_get_scheduler", 00:18:04.816 "framework_set_scheduler", 00:18:04.816 "framework_get_reactors", 00:18:04.816 "thread_get_io_channels", 00:18:04.816 "thread_get_pollers", 00:18:04.816 "thread_get_stats", 00:18:04.816 "framework_monitor_context_switch", 00:18:04.816 "spdk_kill_instance", 00:18:04.816 "log_enable_timestamps", 00:18:04.816 "log_get_flags", 00:18:04.816 "log_clear_flag", 00:18:04.816 "log_set_flag", 00:18:04.816 "log_get_level", 00:18:04.816 "log_set_level", 00:18:04.816 "log_get_print_level", 00:18:04.816 "log_set_print_level", 00:18:04.816 "framework_enable_cpumask_locks", 00:18:04.816 "framework_disable_cpumask_locks", 00:18:04.816 "framework_wait_init", 00:18:04.816 "framework_start_init", 00:18:04.816 "scsi_get_devices", 00:18:04.816 "bdev_get_histogram", 00:18:04.816 "bdev_enable_histogram", 00:18:04.816 "bdev_set_qos_limit", 00:18:04.816 "bdev_set_qd_sampling_period", 00:18:04.816 "bdev_get_bdevs", 00:18:04.816 "bdev_reset_iostat", 00:18:04.816 "bdev_get_iostat", 00:18:04.816 "bdev_examine", 00:18:04.816 "bdev_wait_for_examine", 00:18:04.816 "bdev_set_options", 00:18:04.816 "accel_get_stats", 00:18:04.816 "accel_set_options", 00:18:04.816 "accel_set_driver", 00:18:04.816 "accel_crypto_key_destroy", 00:18:04.816 "accel_crypto_keys_get", 00:18:04.816 "accel_crypto_key_create", 00:18:04.816 "accel_assign_opc", 00:18:04.816 "accel_get_module_info", 00:18:04.816 "accel_get_opc_assignments", 00:18:04.816 "vmd_rescan", 00:18:04.816 "vmd_remove_device", 00:18:04.816 "vmd_enable", 00:18:04.816 "sock_get_default_impl", 00:18:04.816 "sock_set_default_impl", 00:18:04.816 "sock_impl_set_options", 00:18:04.816 "sock_impl_get_options", 00:18:04.816 "iobuf_get_stats", 00:18:04.816 "iobuf_set_options", 00:18:04.816 "keyring_get_keys", 00:18:04.816 "framework_get_pci_devices", 00:18:04.816 
"framework_get_config", 00:18:04.816 "framework_get_subsystems", 00:18:04.816 "fsdev_set_opts", 00:18:04.816 "fsdev_get_opts", 00:18:04.816 "trace_get_info", 00:18:04.816 "trace_get_tpoint_group_mask", 00:18:04.816 "trace_disable_tpoint_group", 00:18:04.816 "trace_enable_tpoint_group", 00:18:04.816 "trace_clear_tpoint_mask", 00:18:04.816 "trace_set_tpoint_mask", 00:18:04.816 "notify_get_notifications", 00:18:04.816 "notify_get_types", 00:18:04.816 "spdk_get_version", 00:18:04.816 "rpc_get_methods" 00:18:04.816 ] 00:18:04.816 18:45:33 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:18:04.816 18:45:33 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:04.816 18:45:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:04.816 18:45:33 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:18:04.816 18:45:33 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59455 00:18:04.816 18:45:33 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 59455 ']' 00:18:04.816 18:45:33 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 59455 00:18:04.816 18:45:33 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:18:04.816 18:45:33 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:04.816 18:45:33 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59455 00:18:04.816 killing process with pid 59455 00:18:04.816 18:45:33 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:04.816 18:45:33 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:04.816 18:45:33 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59455' 00:18:04.816 18:45:33 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 59455 00:18:04.816 18:45:33 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 59455 00:18:08.146 ************************************ 00:18:08.146 END TEST spdkcli_tcp 00:18:08.146 ************************************ 00:18:08.146 00:18:08.146 real 0m5.117s 00:18:08.146 user 0m9.171s 00:18:08.146 sys 0m0.702s 00:18:08.146 18:45:36 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:08.146 18:45:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:08.146 18:45:36 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:18:08.146 18:45:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:08.146 18:45:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:08.146 18:45:36 -- common/autotest_common.sh@10 -- # set +x 00:18:08.146 ************************************ 00:18:08.146 START TEST dpdk_mem_utility 00:18:08.146 ************************************ 00:18:08.146 18:45:36 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:18:08.146 * Looking for test storage... 
00:18:08.146 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:18:08.146 18:45:36 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:08.146 18:45:36 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:18:08.146 18:45:36 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:08.146 18:45:36 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:08.146 18:45:36 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:08.146 18:45:36 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:08.146 18:45:36 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:08.146 18:45:36 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:18:08.146 18:45:36 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:18:08.146 18:45:36 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:18:08.146 18:45:36 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:18:08.146 18:45:36 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:18:08.146 18:45:36 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:18:08.146 18:45:36 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:18:08.146 18:45:36 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:08.146 18:45:36 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:18:08.146 18:45:36 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:18:08.146 18:45:36 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:08.146 18:45:36 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:08.146 18:45:36 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:18:08.146 18:45:36 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:18:08.146 18:45:36 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:08.146 18:45:36 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:18:08.146 18:45:36 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:18:08.146 18:45:36 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:18:08.146 18:45:36 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:18:08.146 18:45:36 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:08.146 18:45:36 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:18:08.146 18:45:36 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:18:08.146 18:45:36 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:08.146 18:45:36 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:08.146 18:45:36 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:18:08.146 18:45:36 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:08.146 18:45:36 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:08.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.146 --rc genhtml_branch_coverage=1 00:18:08.146 --rc genhtml_function_coverage=1 00:18:08.146 --rc genhtml_legend=1 00:18:08.146 --rc geninfo_all_blocks=1 00:18:08.146 --rc geninfo_unexecuted_blocks=1 00:18:08.146 00:18:08.146 ' 00:18:08.146 18:45:36 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:08.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.146 --rc 
genhtml_branch_coverage=1 00:18:08.146 --rc genhtml_function_coverage=1 00:18:08.146 --rc genhtml_legend=1 00:18:08.146 --rc geninfo_all_blocks=1 00:18:08.147 --rc geninfo_unexecuted_blocks=1 00:18:08.147 00:18:08.147 ' 00:18:08.147 18:45:36 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:08.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.147 --rc genhtml_branch_coverage=1 00:18:08.147 --rc genhtml_function_coverage=1 00:18:08.147 --rc genhtml_legend=1 00:18:08.147 --rc geninfo_all_blocks=1 00:18:08.147 --rc geninfo_unexecuted_blocks=1 00:18:08.147 00:18:08.147 ' 00:18:08.147 18:45:36 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:08.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.147 --rc genhtml_branch_coverage=1 00:18:08.147 --rc genhtml_function_coverage=1 00:18:08.147 --rc genhtml_legend=1 00:18:08.147 --rc geninfo_all_blocks=1 00:18:08.147 --rc geninfo_unexecuted_blocks=1 00:18:08.147 00:18:08.147 ' 00:18:08.147 18:45:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:18:08.147 18:45:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59588 00:18:08.147 18:45:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59588 00:18:08.147 18:45:36 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 59588 ']' 00:18:08.147 18:45:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:08.147 18:45:36 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:08.147 18:45:36 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:08.147 18:45:36 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:08.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:08.147 18:45:36 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:08.147 18:45:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:18:08.405 [2024-10-08 18:45:36.916604] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
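The dpdk_mem_utility flow traced below asks the running target for a DPDK memory dump over RPC (env_dpdk_get_mem_stats answers with the dump file path) and then summarizes it with scripts/dpdk_mem_info.py, first without flags and then with -m 0 for heap 0 detail. The same commands, as a sketch:

    # Sketch: dump and inspect DPDK memory stats, mirroring the trace below.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
    # -> {"filename": "/tmp/spdk_mem_dump.txt"}
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py        # overall summary
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0   # heap 0 detail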
00:18:08.405 [2024-10-08 18:45:36.917240] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59588 ] 00:18:08.405 [2024-10-08 18:45:37.089671] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.751 [2024-10-08 18:45:37.341063] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.693 18:45:38 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:09.693 18:45:38 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:18:09.693 18:45:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:18:09.693 18:45:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:18:09.693 18:45:38 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.693 18:45:38 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:18:09.693 { 00:18:09.693 "filename": "/tmp/spdk_mem_dump.txt" 00:18:09.693 } 00:18:09.693 18:45:38 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.693 18:45:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:18:09.693 DPDK memory size 866.000000 MiB in 1 heap(s) 00:18:09.693 1 heaps totaling size 866.000000 MiB 00:18:09.693 size: 866.000000 MiB heap id: 0 00:18:09.693 end heaps---------- 00:18:09.693 9 mempools totaling size 642.649841 MiB 00:18:09.693 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:18:09.693 size: 158.602051 MiB name: PDU_data_out_Pool 00:18:09.693 size: 92.545471 MiB name: bdev_io_59588 00:18:09.693 size: 51.011292 MiB name: evtpool_59588 00:18:09.693 size: 50.003479 MiB name: msgpool_59588 00:18:09.693 size: 36.509338 MiB name: fsdev_io_59588 00:18:09.693 size: 21.763794 MiB name: PDU_Pool 00:18:09.693 size: 19.513306 MiB name: SCSI_TASK_Pool 00:18:09.693 size: 0.026123 MiB name: Session_Pool 00:18:09.693 end mempools------- 00:18:09.693 6 memzones totaling size 4.142822 MiB 00:18:09.693 size: 1.000366 MiB name: RG_ring_0_59588 00:18:09.693 size: 1.000366 MiB name: RG_ring_1_59588 00:18:09.693 size: 1.000366 MiB name: RG_ring_4_59588 00:18:09.693 size: 1.000366 MiB name: RG_ring_5_59588 00:18:09.693 size: 0.125366 MiB name: RG_ring_2_59588 00:18:09.693 size: 0.015991 MiB name: RG_ring_3_59588 00:18:09.693 end memzones------- 00:18:09.953 18:45:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:18:09.953 heap id: 0 total size: 866.000000 MiB number of busy elements: 314 number of free elements: 19 00:18:09.953 list of free elements. 
size: 19.913818 MiB 00:18:09.953 element at address: 0x200000400000 with size: 1.999451 MiB 00:18:09.953 element at address: 0x200000800000 with size: 1.996887 MiB 00:18:09.953 element at address: 0x200009600000 with size: 1.995972 MiB 00:18:09.953 element at address: 0x20000d800000 with size: 1.995972 MiB 00:18:09.953 element at address: 0x200007000000 with size: 1.991028 MiB 00:18:09.953 element at address: 0x20001bf00040 with size: 0.999939 MiB 00:18:09.953 element at address: 0x20001c300040 with size: 0.999939 MiB 00:18:09.953 element at address: 0x20001c400000 with size: 0.999084 MiB 00:18:09.953 element at address: 0x200035000000 with size: 0.994324 MiB 00:18:09.953 element at address: 0x20001bc00000 with size: 0.959656 MiB 00:18:09.953 element at address: 0x20001c700040 with size: 0.936401 MiB 00:18:09.953 element at address: 0x200000200000 with size: 0.831909 MiB 00:18:09.953 element at address: 0x20001de00000 with size: 0.562195 MiB 00:18:09.953 element at address: 0x200003e00000 with size: 0.490173 MiB 00:18:09.953 element at address: 0x20001c000000 with size: 0.488464 MiB 00:18:09.953 element at address: 0x20001c800000 with size: 0.485413 MiB 00:18:09.953 element at address: 0x200015e00000 with size: 0.443481 MiB 00:18:09.953 element at address: 0x20002b200000 with size: 0.390442 MiB 00:18:09.953 element at address: 0x200003a00000 with size: 0.353088 MiB 00:18:09.953 list of standard malloc elements. size: 199.287476 MiB 00:18:09.953 element at address: 0x20000d9fef80 with size: 132.000183 MiB 00:18:09.953 element at address: 0x2000097fef80 with size: 64.000183 MiB 00:18:09.953 element at address: 0x20001bdfff80 with size: 1.000183 MiB 00:18:09.953 element at address: 0x20001c1fff80 with size: 1.000183 MiB 00:18:09.953 element at address: 0x20001c5fff80 with size: 1.000183 MiB 00:18:09.953 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:18:09.953 element at address: 0x20001c7eff40 with size: 0.062683 MiB 00:18:09.953 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:18:09.953 element at address: 0x20000d7ff040 with size: 0.000427 MiB 00:18:09.953 element at address: 0x20001c7efdc0 with size: 0.000366 MiB 00:18:09.953 element at address: 0x200015dff040 with size: 0.000305 MiB 00:18:09.953 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:18:09.953 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:18:09.953 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:18:09.953 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:18:09.954 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:18:09.954 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:18:09.954 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:18:09.954 element at address: 0x2000002d5680 with size: 0.000244 MiB 00:18:09.954 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:18:09.954 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:18:09.954 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:18:09.954 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:18:09.954 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:18:09.954 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:18:09.954 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:18:09.954 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:18:09.954 element at address: 0x2000002d5f80 with size: 0.000244 MiB 00:18:09.954 element at address: 0x2000002d6200 with size: 0.000244 MiB 
00:18:09.954 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:18:09.954 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:18:09.954 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:18:09.954 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:18:09.954 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:18:09.954 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:18:09.954 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:18:09.954 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:18:09.954 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:18:09.954 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:18:09.954 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:18:09.954 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:18:09.954 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:18:09.954 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:18:09.954 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:18:09.954 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:18:09.954 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:18:09.954 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:18:09.954 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:18:09.954 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:18:09.954 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:18:09.954 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:18:09.954 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:18:09.954 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:18:09.954 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:18:09.954 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200003a7eac0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200003a7ebc0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200003a7ecc0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200003a7edc0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200003a7eec0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200003a7efc0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200003a7f0c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200003a7f1c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200003a7f2c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200003a7f3c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200003a7f4c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200003aff800 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200003affa80 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200003e7d7c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200003e7d8c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200003e7d9c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200003e7dac0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200003e7dbc0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200003e7dcc0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200003e7ddc0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200003e7dec0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200003e7dfc0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200003e7e0c0 with size: 0.000244 MiB 00:18:09.954 element at 
address: 0x200003e7e1c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200003e7e2c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200003e7e3c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200003e7e4c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200003e7e5c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200003e7e6c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200003e7e7c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200003e7e8c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200003e7e9c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200003e7eac0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200003e7ebc0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200003efef00 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200003eff000 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20000d7ff200 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20000d7ff300 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20000d7ff400 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20000d7ff500 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20000d7ff600 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20000d7ff700 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20000d7ff800 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20000d7ff900 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20000d7ffa00 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20000d7ffb00 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20000d7ffc00 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20000d7ffd00 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20000d7ffe00 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20000d7fff00 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200015dff180 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200015dff280 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200015dff380 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200015dff480 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200015dff580 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200015dff680 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200015dff780 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200015dff880 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200015dff980 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200015dffa80 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200015dffb80 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200015dffc80 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200015dfff00 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200015e71880 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200015e71980 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200015e71a80 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200015e71b80 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200015e71c80 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200015e71d80 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200015e71e80 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200015e71f80 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200015e72080 with size: 0.000244 MiB 00:18:09.954 element at address: 0x200015e72180 
with size: 0.000244 MiB 00:18:09.954 element at address: 0x200015ef24c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001bcfdd00 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001c07d0c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001c07d1c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001c07d2c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001c07d3c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001c07d4c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001c07d5c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001c07d6c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001c07d7c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001c07d8c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001c07d9c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001c0fdd00 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001c4ffc40 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001c7efbc0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001c7efcc0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001c8bc680 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de8fec0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de8ffc0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de900c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de901c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de902c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de903c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de904c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de905c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de906c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de907c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de908c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de909c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de90ac0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de90bc0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de90cc0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de90dc0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de90ec0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de90fc0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de910c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de911c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de912c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de913c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de914c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de915c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de916c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de917c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de918c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de919c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de91ac0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de91bc0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de91cc0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de91dc0 with size: 0.000244 MiB 
00:18:09.954 element at address: 0x20001de91ec0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de91fc0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de920c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de921c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de922c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de923c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de924c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de925c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de926c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de927c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de928c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de929c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de92ac0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de92bc0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de92cc0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de92dc0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de92ec0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de92fc0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de930c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de931c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de932c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de933c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de934c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de935c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de936c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de937c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de938c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de939c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de93ac0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de93bc0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de93cc0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de93dc0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de93ec0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de93fc0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de940c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de941c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de942c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de943c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de944c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de945c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de946c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de947c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de948c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de949c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de94ac0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de94bc0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de94cc0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de94dc0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de94ec0 with size: 0.000244 MiB 00:18:09.954 element at 
address: 0x20001de94fc0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de950c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de951c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de952c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20001de953c0 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b263f40 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b264040 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26ad00 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26af80 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26b080 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26b180 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26b280 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26b380 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26b480 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26b580 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26b680 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26b780 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26b880 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26b980 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26ba80 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26bb80 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26bc80 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26bd80 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26be80 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26bf80 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26c080 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26c180 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26c280 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26c380 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26c480 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26c580 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26c680 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26c780 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26c880 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26c980 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26ca80 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26cb80 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26cc80 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26cd80 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26ce80 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26cf80 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26d080 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26d180 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26d280 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26d380 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26d480 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26d580 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26d680 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26d780 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26d880 
with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26d980 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26da80 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26db80 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26dc80 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26dd80 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26de80 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26df80 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26e080 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26e180 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26e280 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26e380 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26e480 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26e580 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26e680 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26e780 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26e880 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26e980 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26ea80 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26eb80 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26ec80 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26ed80 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26ee80 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26ef80 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26f080 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26f180 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26f280 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26f380 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26f480 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26f580 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26f680 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26f780 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26f880 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26f980 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26fa80 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26fb80 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26fc80 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26fd80 with size: 0.000244 MiB 00:18:09.954 element at address: 0x20002b26fe80 with size: 0.000244 MiB 00:18:09.954 list of memzone associated elements. 
size: 646.798706 MiB 00:18:09.954 element at address: 0x20001de954c0 with size: 211.416809 MiB 00:18:09.954 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:18:09.954 element at address: 0x20002b26ff80 with size: 157.562622 MiB 00:18:09.954 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:18:09.955 element at address: 0x200015ff4740 with size: 92.045105 MiB 00:18:09.955 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_59588_0 00:18:09.955 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:18:09.955 associated memzone info: size: 48.002930 MiB name: MP_evtpool_59588_0 00:18:09.955 element at address: 0x200003fff340 with size: 48.003113 MiB 00:18:09.955 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59588_0 00:18:09.955 element at address: 0x2000071fdb40 with size: 36.008972 MiB 00:18:09.955 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59588_0 00:18:09.955 element at address: 0x20001c9be900 with size: 20.255615 MiB 00:18:09.955 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:18:09.955 element at address: 0x2000351feb00 with size: 18.005127 MiB 00:18:09.955 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:18:09.955 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:18:09.955 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_59588 00:18:09.955 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:18:09.955 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59588 00:18:09.955 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:18:09.955 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59588 00:18:09.955 element at address: 0x20001c0fde00 with size: 1.008179 MiB 00:18:09.955 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:18:09.955 element at address: 0x20001c8bc780 with size: 1.008179 MiB 00:18:09.955 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:18:09.955 element at address: 0x20001bcfde00 with size: 1.008179 MiB 00:18:09.955 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:18:09.955 element at address: 0x200015ef25c0 with size: 1.008179 MiB 00:18:09.955 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:18:09.955 element at address: 0x200003eff100 with size: 1.000549 MiB 00:18:09.955 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59588 00:18:09.955 element at address: 0x200003affb80 with size: 1.000549 MiB 00:18:09.955 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59588 00:18:09.955 element at address: 0x20001c4ffd40 with size: 1.000549 MiB 00:18:09.955 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59588 00:18:09.955 element at address: 0x2000350fe8c0 with size: 1.000549 MiB 00:18:09.955 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59588 00:18:09.955 element at address: 0x200003a7f5c0 with size: 0.500549 MiB 00:18:09.955 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59588 00:18:09.955 element at address: 0x200003e7ecc0 with size: 0.500549 MiB 00:18:09.955 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59588 00:18:09.955 element at address: 0x20001c07dac0 with size: 0.500549 MiB 00:18:09.955 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:18:09.955 element at address: 0x200015e72280 with size: 0.500549 MiB 00:18:09.955 associated memzone info: size: 0.500366 
MiB name: RG_MP_SCSI_TASK_Pool 00:18:09.955 element at address: 0x20001c87c440 with size: 0.250549 MiB 00:18:09.955 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:18:09.955 element at address: 0x200003a5e880 with size: 0.125549 MiB 00:18:09.955 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59588 00:18:09.955 element at address: 0x20001bcf5ac0 with size: 0.031799 MiB 00:18:09.955 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:18:09.955 element at address: 0x20002b264140 with size: 0.023804 MiB 00:18:09.955 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:18:09.955 element at address: 0x200003a5a640 with size: 0.016174 MiB 00:18:09.955 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59588 00:18:09.955 element at address: 0x20002b26a2c0 with size: 0.002502 MiB 00:18:09.955 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:18:09.955 element at address: 0x2000002d6080 with size: 0.000366 MiB 00:18:09.955 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59588 00:18:09.955 element at address: 0x200003aff900 with size: 0.000366 MiB 00:18:09.955 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59588 00:18:09.955 element at address: 0x200015dffd80 with size: 0.000366 MiB 00:18:09.955 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59588 00:18:09.955 element at address: 0x20002b26ae00 with size: 0.000366 MiB 00:18:09.955 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:18:09.955 18:45:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:18:09.955 18:45:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59588 00:18:09.955 18:45:38 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 59588 ']' 00:18:09.955 18:45:38 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 59588 00:18:09.955 18:45:38 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:18:09.955 18:45:38 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:09.955 18:45:38 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59588 00:18:09.955 killing process with pid 59588 00:18:09.955 18:45:38 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:09.955 18:45:38 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:09.955 18:45:38 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59588' 00:18:09.955 18:45:38 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 59588 00:18:09.955 18:45:38 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 59588 00:18:13.238 00:18:13.238 real 0m5.270s 00:18:13.238 user 0m5.246s 00:18:13.238 sys 0m0.663s 00:18:13.238 18:45:41 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:13.238 ************************************ 00:18:13.238 END TEST dpdk_mem_utility 00:18:13.238 ************************************ 00:18:13.238 18:45:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:18:13.238 18:45:41 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:18:13.238 18:45:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:13.238 18:45:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:13.238 18:45:41 -- common/autotest_common.sh@10 -- # set +x 
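[Editor's note: the `run_test event ...` trace just above, and the START TEST / END TEST banners that bracket each suite in this log, come from the run_test helper in common/autotest_common.sh. A minimal sketch of that wrapper pattern follows, under the assumption that it only prints banners and times its command — the real helper also toggles xtrace and does extra result bookkeeping:]

    run_test() {
        local test_name=$1
        shift                                   # remaining args are the command to run
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                               # produces the real/user/sys lines seen in this log
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }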
00:18:13.238 ************************************ 00:18:13.238 START TEST event 00:18:13.238 ************************************ 00:18:13.238 18:45:41 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:18:13.497 * Looking for test storage... 00:18:13.497 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:18:13.497 18:45:42 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:13.497 18:45:42 event -- common/autotest_common.sh@1681 -- # lcov --version 00:18:13.497 18:45:42 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:13.497 18:45:42 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:13.497 18:45:42 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:13.497 18:45:42 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:13.497 18:45:42 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:13.497 18:45:42 event -- scripts/common.sh@336 -- # IFS=.-: 00:18:13.497 18:45:42 event -- scripts/common.sh@336 -- # read -ra ver1 00:18:13.497 18:45:42 event -- scripts/common.sh@337 -- # IFS=.-: 00:18:13.497 18:45:42 event -- scripts/common.sh@337 -- # read -ra ver2 00:18:13.497 18:45:42 event -- scripts/common.sh@338 -- # local 'op=<' 00:18:13.497 18:45:42 event -- scripts/common.sh@340 -- # ver1_l=2 00:18:13.497 18:45:42 event -- scripts/common.sh@341 -- # ver2_l=1 00:18:13.497 18:45:42 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:13.497 18:45:42 event -- scripts/common.sh@344 -- # case "$op" in 00:18:13.497 18:45:42 event -- scripts/common.sh@345 -- # : 1 00:18:13.497 18:45:42 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:13.497 18:45:42 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:13.497 18:45:42 event -- scripts/common.sh@365 -- # decimal 1 00:18:13.497 18:45:42 event -- scripts/common.sh@353 -- # local d=1 00:18:13.497 18:45:42 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:13.497 18:45:42 event -- scripts/common.sh@355 -- # echo 1 00:18:13.497 18:45:42 event -- scripts/common.sh@365 -- # ver1[v]=1 00:18:13.497 18:45:42 event -- scripts/common.sh@366 -- # decimal 2 00:18:13.497 18:45:42 event -- scripts/common.sh@353 -- # local d=2 00:18:13.497 18:45:42 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:13.497 18:45:42 event -- scripts/common.sh@355 -- # echo 2 00:18:13.497 18:45:42 event -- scripts/common.sh@366 -- # ver2[v]=2 00:18:13.497 18:45:42 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:13.497 18:45:42 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:13.497 18:45:42 event -- scripts/common.sh@368 -- # return 0 00:18:13.497 18:45:42 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:13.497 18:45:42 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:13.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.497 --rc genhtml_branch_coverage=1 00:18:13.497 --rc genhtml_function_coverage=1 00:18:13.497 --rc genhtml_legend=1 00:18:13.497 --rc geninfo_all_blocks=1 00:18:13.497 --rc geninfo_unexecuted_blocks=1 00:18:13.497 00:18:13.497 ' 00:18:13.497 18:45:42 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:13.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.497 --rc genhtml_branch_coverage=1 00:18:13.497 --rc genhtml_function_coverage=1 00:18:13.497 --rc genhtml_legend=1 00:18:13.497 --rc 
geninfo_all_blocks=1 00:18:13.497 --rc geninfo_unexecuted_blocks=1 00:18:13.497 00:18:13.497 ' 00:18:13.497 18:45:42 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:13.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.497 --rc genhtml_branch_coverage=1 00:18:13.497 --rc genhtml_function_coverage=1 00:18:13.497 --rc genhtml_legend=1 00:18:13.497 --rc geninfo_all_blocks=1 00:18:13.497 --rc geninfo_unexecuted_blocks=1 00:18:13.497 00:18:13.497 ' 00:18:13.497 18:45:42 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:13.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.497 --rc genhtml_branch_coverage=1 00:18:13.497 --rc genhtml_function_coverage=1 00:18:13.497 --rc genhtml_legend=1 00:18:13.497 --rc geninfo_all_blocks=1 00:18:13.497 --rc geninfo_unexecuted_blocks=1 00:18:13.497 00:18:13.497 ' 00:18:13.498 18:45:42 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:13.498 18:45:42 event -- bdev/nbd_common.sh@6 -- # set -e 00:18:13.498 18:45:42 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:18:13.498 18:45:42 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:18:13.498 18:45:42 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:13.498 18:45:42 event -- common/autotest_common.sh@10 -- # set +x 00:18:13.498 ************************************ 00:18:13.498 START TEST event_perf 00:18:13.498 ************************************ 00:18:13.498 18:45:42 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:18:13.498 Running I/O for 1 seconds...[2024-10-08 18:45:42.156378] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:18:13.498 [2024-10-08 18:45:42.156512] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59713 ] 00:18:13.758 [2024-10-08 18:45:42.324117] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:14.016 [2024-10-08 18:45:42.585400] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:14.016 [2024-10-08 18:45:42.585571] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:18:14.016 [2024-10-08 18:45:42.585602] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.017 Running I/O for 1 seconds...[2024-10-08 18:45:42.585611] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:18:15.391 00:18:15.391 lcore 0: 85004 00:18:15.391 lcore 1: 85007 00:18:15.391 lcore 2: 85001 00:18:15.391 lcore 3: 85002 00:18:15.391 done. 
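[Editor's note: the four lcore counters above mean each reactor processed roughly 85k events in the one-second window requested with -t 1, about 340,014 events total (85004 + 85007 + 85001 + 85002) across the 0xF core mask. The run can be reproduced by hand from a built tree with the exact command traced above:]

    # four reactors on cores 0-3 (-m 0xF), measure for one second (-t 1)
    /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1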
00:18:15.391 00:18:15.391 real 0m1.960s 00:18:15.391 user 0m4.679s 00:18:15.391 sys 0m0.152s 00:18:15.391 18:45:44 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:15.391 ************************************ 00:18:15.391 END TEST event_perf 00:18:15.391 ************************************ 00:18:15.391 18:45:44 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:18:15.391 18:45:44 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:18:15.391 18:45:44 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:15.391 18:45:44 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:15.391 18:45:44 event -- common/autotest_common.sh@10 -- # set +x 00:18:15.391 ************************************ 00:18:15.391 START TEST event_reactor 00:18:15.391 ************************************ 00:18:15.391 18:45:44 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:18:15.650 [2024-10-08 18:45:44.174104] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:18:15.650 [2024-10-08 18:45:44.174535] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59758 ] 00:18:15.650 [2024-10-08 18:45:44.372210] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.217 [2024-10-08 18:45:44.717994] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.592 test_start 00:18:17.592 oneshot 00:18:17.592 tick 100 00:18:17.592 tick 100 00:18:17.592 tick 250 00:18:17.592 tick 100 00:18:17.592 tick 100 00:18:17.592 tick 100 00:18:17.592 tick 250 00:18:17.592 tick 500 00:18:17.592 tick 100 00:18:17.592 tick 100 00:18:17.592 tick 250 00:18:17.592 tick 100 00:18:17.592 tick 100 00:18:17.592 test_end 00:18:17.592 00:18:17.592 real 0m2.061s 00:18:17.592 user 0m1.800s 00:18:17.592 sys 0m0.146s 00:18:17.592 ************************************ 00:18:17.592 END TEST event_reactor 00:18:17.592 ************************************ 00:18:17.592 18:45:46 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:17.592 18:45:46 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:18:17.592 18:45:46 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:18:17.592 18:45:46 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:17.592 18:45:46 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:17.592 18:45:46 event -- common/autotest_common.sh@10 -- # set +x 00:18:17.592 ************************************ 00:18:17.592 START TEST event_reactor_perf 00:18:17.592 ************************************ 00:18:17.592 18:45:46 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:18:17.592 [2024-10-08 18:45:46.306894] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:18:17.593 [2024-10-08 18:45:46.307132] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59800 ] 00:18:17.851 [2024-10-08 18:45:46.497751] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.111 [2024-10-08 18:45:46.807009] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.014 test_start 00:18:20.014 test_end 00:18:20.014 Performance: 307318 events per second 00:18:20.014 00:18:20.014 real 0m2.009s 00:18:20.014 user 0m1.749s 00:18:20.014 sys 0m0.147s 00:18:20.014 ************************************ 00:18:20.014 END TEST event_reactor_perf 00:18:20.014 ************************************ 00:18:20.014 18:45:48 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:20.014 18:45:48 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:18:20.014 18:45:48 event -- event/event.sh@49 -- # uname -s 00:18:20.014 18:45:48 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:18:20.014 18:45:48 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:18:20.014 18:45:48 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:20.014 18:45:48 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:20.014 18:45:48 event -- common/autotest_common.sh@10 -- # set +x 00:18:20.014 ************************************ 00:18:20.014 START TEST event_scheduler 00:18:20.014 ************************************ 00:18:20.014 18:45:48 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:18:20.014 * Looking for test storage... 
00:18:20.014 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:18:20.014 18:45:48 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:20.014 18:45:48 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:18:20.014 18:45:48 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:20.014 18:45:48 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:20.014 18:45:48 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:20.014 18:45:48 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:20.014 18:45:48 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:20.014 18:45:48 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:18:20.014 18:45:48 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:18:20.014 18:45:48 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:18:20.014 18:45:48 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:18:20.014 18:45:48 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:18:20.014 18:45:48 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:18:20.014 18:45:48 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:18:20.014 18:45:48 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:20.014 18:45:48 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:18:20.014 18:45:48 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:18:20.014 18:45:48 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:20.014 18:45:48 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:20.014 18:45:48 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:18:20.014 18:45:48 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:18:20.014 18:45:48 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:20.014 18:45:48 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:18:20.014 18:45:48 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:18:20.014 18:45:48 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:18:20.014 18:45:48 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:18:20.014 18:45:48 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:20.014 18:45:48 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:18:20.014 18:45:48 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:18:20.014 18:45:48 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:20.014 18:45:48 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:20.014 18:45:48 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:18:20.014 18:45:48 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:20.014 18:45:48 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:20.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.014 --rc genhtml_branch_coverage=1 00:18:20.014 --rc genhtml_function_coverage=1 00:18:20.014 --rc genhtml_legend=1 00:18:20.014 --rc geninfo_all_blocks=1 00:18:20.014 --rc geninfo_unexecuted_blocks=1 00:18:20.014 00:18:20.014 ' 00:18:20.014 18:45:48 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:20.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.014 --rc genhtml_branch_coverage=1 00:18:20.014 --rc genhtml_function_coverage=1 00:18:20.014 --rc genhtml_legend=1 00:18:20.014 --rc geninfo_all_blocks=1 00:18:20.014 --rc geninfo_unexecuted_blocks=1 00:18:20.014 00:18:20.014 ' 00:18:20.014 18:45:48 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:20.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.014 --rc genhtml_branch_coverage=1 00:18:20.014 --rc genhtml_function_coverage=1 00:18:20.014 --rc genhtml_legend=1 00:18:20.014 --rc geninfo_all_blocks=1 00:18:20.014 --rc geninfo_unexecuted_blocks=1 00:18:20.014 00:18:20.014 ' 00:18:20.014 18:45:48 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:20.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.014 --rc genhtml_branch_coverage=1 00:18:20.014 --rc genhtml_function_coverage=1 00:18:20.014 --rc genhtml_legend=1 00:18:20.014 --rc geninfo_all_blocks=1 00:18:20.014 --rc geninfo_unexecuted_blocks=1 00:18:20.014 00:18:20.014 ' 00:18:20.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
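[Editor's note: the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above is printed by the waitforlisten helper, which the traces show being called with a pid, a local rpc_addr defaulting to /var/tmp/spdk.sock, and max_retries=100. A minimal sketch of that polling loop follows; the probe via scripts/rpc.py and the rpc_get_methods RPC are assumptions for illustration — the real helper in common/autotest_common.sh does its own socket checking:]

    waitforlisten() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100
        local i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1    # give up if the target died
            # assumed probe: any cheap RPC succeeding proves the socket is answering
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.5
        done
        return 1                                       # timed out waiting for the listener
    }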
00:18:20.014 18:45:48 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:18:20.014 18:45:48 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59876 00:18:20.014 18:45:48 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:18:20.014 18:45:48 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:18:20.014 18:45:48 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59876 00:18:20.014 18:45:48 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 59876 ']' 00:18:20.014 18:45:48 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.014 18:45:48 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:20.015 18:45:48 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:20.015 18:45:48 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:20.015 18:45:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:18:20.015 [2024-10-08 18:45:48.695865] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:18:20.015 [2024-10-08 18:45:48.696326] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59876 ] 00:18:20.295 [2024-10-08 18:45:48.875528] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:20.553 [2024-10-08 18:45:49.133879] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.553 [2024-10-08 18:45:49.133936] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:20.553 [2024-10-08 18:45:49.134057] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:18:20.553 [2024-10-08 18:45:49.134085] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:18:21.119 18:45:49 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:21.119 18:45:49 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:18:21.119 18:45:49 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:18:21.119 18:45:49 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.119 18:45:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:18:21.119 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:18:21.119 POWER: Cannot set governor of lcore 0 to userspace 00:18:21.119 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:18:21.119 POWER: Cannot set governor of lcore 0 to performance 00:18:21.119 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:18:21.119 POWER: Cannot set governor of lcore 0 to userspace 00:18:21.119 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:18:21.119 POWER: Cannot set governor of lcore 0 to userspace 00:18:21.119 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:18:21.119 GUEST_CHANNEL: Unable to connect to 
'/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:18:21.119 POWER: Unable to set Power Management Environment for lcore 0 00:18:21.119 [2024-10-08 18:45:49.721070] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:18:21.119 [2024-10-08 18:45:49.721101] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:18:21.119 [2024-10-08 18:45:49.721123] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:18:21.119 [2024-10-08 18:45:49.721157] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:18:21.119 [2024-10-08 18:45:49.721173] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:18:21.119 [2024-10-08 18:45:49.721191] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:18:21.119 18:45:49 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.119 18:45:49 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:18:21.119 18:45:49 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.119 18:45:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:18:21.377 [2024-10-08 18:45:50.111587] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:18:21.377 18:45:50 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.377 18:45:50 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:18:21.377 18:45:50 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:21.377 18:45:50 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:21.377 18:45:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:18:21.377 ************************************ 00:18:21.377 START TEST scheduler_create_thread 00:18:21.377 ************************************ 00:18:21.377 18:45:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:18:21.377 18:45:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:18:21.377 18:45:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.377 18:45:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:21.636 2 00:18:21.636 18:45:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.636 18:45:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:18:21.636 18:45:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.636 18:45:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:21.636 3 00:18:21.636 18:45:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.636 18:45:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:18:21.636 18:45:50 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.636 18:45:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:21.636 4 00:18:21.636 18:45:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.636 18:45:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:18:21.636 18:45:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.636 18:45:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:21.636 5 00:18:21.636 18:45:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.637 18:45:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:18:21.637 18:45:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.637 18:45:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:21.637 6 00:18:21.637 18:45:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.637 18:45:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:18:21.637 18:45:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.637 18:45:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:21.637 7 00:18:21.637 18:45:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.637 18:45:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:18:21.637 18:45:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.637 18:45:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:21.637 8 00:18:21.637 18:45:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.637 18:45:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:18:21.637 18:45:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.637 18:45:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:21.894 9 00:18:21.894 18:45:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.894 18:45:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:18:21.894 18:45:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.894 18:45:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:23.268 10 00:18:23.268 18:45:51 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.268 18:45:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:18:23.268 18:45:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.268 18:45:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:24.203 18:45:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.203 18:45:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:18:24.203 18:45:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:18:24.203 18:45:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.203 18:45:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:24.769 18:45:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.769 18:45:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:18:24.769 18:45:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.769 18:45:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:25.334 18:45:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.334 18:45:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:18:25.334 18:45:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:18:25.334 18:45:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.334 18:45:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:25.901 ************************************ 00:18:25.901 END TEST scheduler_create_thread 00:18:25.901 ************************************ 00:18:25.901 18:45:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.901 00:18:25.901 real 0m4.466s 00:18:25.901 user 0m0.019s 00:18:25.901 sys 0m0.006s 00:18:25.901 18:45:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:25.901 18:45:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:18:25.901 18:45:54 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:18:25.901 18:45:54 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59876 00:18:25.901 18:45:54 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 59876 ']' 00:18:25.901 18:45:54 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 59876 00:18:25.901 18:45:54 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:18:25.901 18:45:54 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:25.901 18:45:54 event.event_scheduler -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59876 00:18:26.190 killing process with pid 59876 00:18:26.190 18:45:54 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:26.190 18:45:54 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:26.190 18:45:54 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59876' 00:18:26.190 18:45:54 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 59876 00:18:26.190 18:45:54 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 59876 00:18:26.190 [2024-10-08 18:45:54.872428] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:18:28.091 ************************************ 00:18:28.091 END TEST event_scheduler 00:18:28.091 ************************************ 00:18:28.091 00:18:28.091 real 0m8.134s 00:18:28.091 user 0m18.312s 00:18:28.091 sys 0m0.603s 00:18:28.091 18:45:56 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:28.091 18:45:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:18:28.091 18:45:56 event -- event/event.sh@51 -- # modprobe -n nbd 00:18:28.091 18:45:56 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:18:28.091 18:45:56 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:28.091 18:45:56 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:28.091 18:45:56 event -- common/autotest_common.sh@10 -- # set +x 00:18:28.091 ************************************ 00:18:28.091 START TEST app_repeat 00:18:28.091 ************************************ 00:18:28.091 18:45:56 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:18:28.092 18:45:56 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:28.092 18:45:56 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:28.092 18:45:56 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:18:28.092 18:45:56 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:18:28.092 18:45:56 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:18:28.092 18:45:56 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:18:28.092 18:45:56 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:18:28.092 Process app_repeat pid: 60017 00:18:28.092 spdk_app_start Round 0 00:18:28.092 18:45:56 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60017 00:18:28.092 18:45:56 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:18:28.092 18:45:56 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:18:28.092 18:45:56 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60017' 00:18:28.092 18:45:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:18:28.092 18:45:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:18:28.092 18:45:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60017 /var/tmp/spdk-nbd.sock 00:18:28.092 18:45:56 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60017 ']' 00:18:28.092 18:45:56 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:28.092 18:45:56 event.app_repeat -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:18:28.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:18:28.092 18:45:56 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:18:28.092 18:45:56 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:28.092 18:45:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:18:28.092 [2024-10-08 18:45:56.575238] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:18:28.092 [2024-10-08 18:45:56.575378] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60017 ] 00:18:28.092 [2024-10-08 18:45:56.772646] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:28.350 [2024-10-08 18:45:57.046863] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.350 [2024-10-08 18:45:57.046894] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:28.916 18:45:57 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:28.916 18:45:57 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:18:28.916 18:45:57 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:18:29.174 Malloc0 00:18:29.174 18:45:57 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:18:29.739 Malloc1 00:18:29.739 18:45:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:18:29.739 18:45:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:29.739 18:45:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:18:29.739 18:45:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:18:29.739 18:45:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:29.739 18:45:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:18:29.739 18:45:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:18:29.739 18:45:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:29.739 18:45:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:18:29.739 18:45:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:29.739 18:45:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:29.739 18:45:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:29.739 18:45:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:18:29.739 18:45:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:29.739 18:45:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:29.739 18:45:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:18:29.739 /dev/nbd0 00:18:29.998 18:45:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 
00:18:29.998 18:45:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:29.998 18:45:58 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:29.998 18:45:58 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:18:29.998 18:45:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:29.998 18:45:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:29.998 18:45:58 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:29.998 18:45:58 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:18:29.998 18:45:58 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:29.998 18:45:58 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:29.998 18:45:58 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:18:29.998 1+0 records in 00:18:29.998 1+0 records out 00:18:29.998 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000282204 s, 14.5 MB/s 00:18:29.998 18:45:58 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:18:29.998 18:45:58 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:18:29.998 18:45:58 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:18:29.998 18:45:58 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:29.998 18:45:58 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:18:29.998 18:45:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:29.998 18:45:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:29.998 18:45:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:18:30.257 /dev/nbd1 00:18:30.257 18:45:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:30.257 18:45:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:30.257 18:45:58 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:18:30.257 18:45:58 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:18:30.257 18:45:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:30.257 18:45:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:30.257 18:45:58 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:18:30.257 18:45:58 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:18:30.257 18:45:58 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:30.257 18:45:58 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:30.257 18:45:58 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:18:30.257 1+0 records in 00:18:30.257 1+0 records out 00:18:30.257 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000352851 s, 11.6 MB/s 00:18:30.257 18:45:58 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:18:30.257 18:45:58 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:18:30.257 18:45:58 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:18:30.257 18:45:58 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:30.257 18:45:58 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:18:30.257 18:45:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:30.257 18:45:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:30.257 18:45:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:30.257 18:45:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:30.257 18:45:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:30.516 18:45:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:18:30.516 { 00:18:30.516 "nbd_device": "/dev/nbd0", 00:18:30.516 "bdev_name": "Malloc0" 00:18:30.516 }, 00:18:30.516 { 00:18:30.516 "nbd_device": "/dev/nbd1", 00:18:30.516 "bdev_name": "Malloc1" 00:18:30.516 } 00:18:30.516 ]' 00:18:30.516 18:45:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:30.516 18:45:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:18:30.516 { 00:18:30.516 "nbd_device": "/dev/nbd0", 00:18:30.516 "bdev_name": "Malloc0" 00:18:30.516 }, 00:18:30.516 { 00:18:30.516 "nbd_device": "/dev/nbd1", 00:18:30.516 "bdev_name": "Malloc1" 00:18:30.516 } 00:18:30.516 ]' 00:18:30.516 18:45:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:18:30.516 /dev/nbd1' 00:18:30.516 18:45:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:30.516 18:45:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:18:30.516 /dev/nbd1' 00:18:30.516 18:45:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:18:30.516 18:45:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:18:30.516 18:45:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:18:30.516 18:45:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:18:30.516 18:45:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:18:30.516 18:45:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:30.516 18:45:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:30.516 18:45:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:18:30.516 18:45:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:18:30.516 18:45:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:18:30.516 18:45:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:18:30.516 256+0 records in 00:18:30.516 256+0 records out 00:18:30.516 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0054034 s, 194 MB/s 00:18:30.516 18:45:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:30.516 18:45:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:18:30.516 256+0 records in 00:18:30.516 256+0 records out 00:18:30.516 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0328021 s, 32.0 MB/s 00:18:30.516 18:45:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:30.516 18:45:59 
event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:18:30.516 256+0 records in 00:18:30.516 256+0 records out 00:18:30.516 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0317005 s, 33.1 MB/s 00:18:30.516 18:45:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:18:30.516 18:45:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:30.516 18:45:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:30.516 18:45:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:18:30.516 18:45:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:18:30.516 18:45:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:18:30.516 18:45:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:18:30.516 18:45:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:30.516 18:45:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:18:30.516 18:45:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:30.516 18:45:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:18:30.516 18:45:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:18:30.516 18:45:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:18:30.516 18:45:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:30.516 18:45:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:30.516 18:45:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:30.516 18:45:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:18:30.516 18:45:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:30.516 18:45:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:30.775 18:45:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:30.775 18:45:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:30.775 18:45:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:30.775 18:45:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:30.775 18:45:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:30.775 18:45:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:30.775 18:45:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:18:30.775 18:45:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:18:30.775 18:45:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:30.775 18:45:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:18:31.034 18:45:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:31.034 18:45:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:31.034 18:45:59 event.app_repeat -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd1 00:18:31.034 18:45:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:31.034 18:45:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:31.034 18:45:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:31.034 18:45:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:18:31.034 18:45:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:18:31.034 18:45:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:31.034 18:45:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:31.034 18:45:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:31.301 18:45:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:31.301 18:45:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:31.301 18:45:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:31.301 18:45:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:31.301 18:45:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:18:31.301 18:45:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:31.301 18:45:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:18:31.301 18:45:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:18:31.301 18:45:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:18:31.301 18:45:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:18:31.301 18:45:59 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:18:31.301 18:45:59 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:18:31.301 18:45:59 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:18:31.906 18:46:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:18:33.875 [2024-10-08 18:46:02.104391] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:33.875 [2024-10-08 18:46:02.342319] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:33.875 [2024-10-08 18:46:02.342327] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.875 [2024-10-08 18:46:02.562308] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:18:33.875 [2024-10-08 18:46:02.562436] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:18:34.832 18:46:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:18:34.832 spdk_app_start Round 1 00:18:34.832 18:46:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:18:34.832 18:46:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60017 /var/tmp/spdk-nbd.sock 00:18:34.832 18:46:03 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60017 ']' 00:18:34.832 18:46:03 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:34.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:18:34.832 18:46:03 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:34.832 18:46:03 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
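
Each app_repeat round above follows the same data-integrity pattern once the Malloc bdevs are exported as nbd devices: fill a temp file with random data, dd it onto the device with direct I/O, then byte-compare the device against the file. A minimal standalone sketch of that pattern, assuming /dev/nbd0 is already connected and reusing the same bs/count/cmp arguments the log shows (the temp file stands in for .../test/event/nbdrandtest):

  tmp_file=$(mktemp)                                              # stand-in for the nbdrandtest file
  dd if=/dev/urandom of="$tmp_file" bs=4096 count=256             # 1 MiB of random payload
  dd if="$tmp_file" of=/dev/nbd0 bs=4096 count=256 oflag=direct   # write it through the nbd device
  cmp -b -n 1M "$tmp_file" /dev/nbd0                              # verify device contents byte-for-byte
  rm -f "$tmp_file"

cmp exits non-zero at the first differing byte, so a corrupted write fails the round immediately.
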
00:18:34.832 18:46:03 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:34.832 18:46:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:18:35.398 18:46:03 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:35.398 18:46:03 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:18:35.398 18:46:03 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:18:35.656 Malloc0 00:18:35.656 18:46:04 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:18:35.913 Malloc1 00:18:35.913 18:46:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:18:35.913 18:46:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:35.913 18:46:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:18:35.913 18:46:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:18:35.913 18:46:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:35.913 18:46:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:18:35.913 18:46:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:18:35.913 18:46:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:35.913 18:46:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:18:35.913 18:46:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:35.913 18:46:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:35.913 18:46:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:35.913 18:46:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:18:35.913 18:46:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:35.913 18:46:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:35.913 18:46:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:18:36.173 /dev/nbd0 00:18:36.173 18:46:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:36.173 18:46:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:36.173 18:46:04 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:36.173 18:46:04 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:18:36.173 18:46:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:36.173 18:46:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:36.173 18:46:04 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:36.173 18:46:04 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:18:36.173 18:46:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:36.173 18:46:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:36.173 18:46:04 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:18:36.173 1+0 records in 00:18:36.173 1+0 records out 
00:18:36.173 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00195022 s, 2.1 MB/s 00:18:36.173 18:46:04 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:18:36.173 18:46:04 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:18:36.173 18:46:04 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:18:36.173 18:46:04 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:36.173 18:46:04 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:18:36.173 18:46:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:36.173 18:46:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:36.173 18:46:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:18:36.779 /dev/nbd1 00:18:36.779 18:46:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:36.779 18:46:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:36.779 18:46:05 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:18:36.779 18:46:05 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:18:36.779 18:46:05 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:36.779 18:46:05 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:36.779 18:46:05 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:18:36.779 18:46:05 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:18:36.779 18:46:05 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:36.779 18:46:05 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:36.779 18:46:05 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:18:36.779 1+0 records in 00:18:36.779 1+0 records out 00:18:36.779 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000376131 s, 10.9 MB/s 00:18:36.779 18:46:05 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:18:36.779 18:46:05 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:18:36.779 18:46:05 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:18:36.779 18:46:05 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:36.779 18:46:05 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:18:36.779 18:46:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:36.779 18:46:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:36.779 18:46:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:36.779 18:46:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:36.779 18:46:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:37.039 18:46:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:18:37.039 { 00:18:37.039 "nbd_device": "/dev/nbd0", 00:18:37.039 "bdev_name": "Malloc0" 00:18:37.039 }, 00:18:37.039 { 00:18:37.039 "nbd_device": "/dev/nbd1", 00:18:37.039 "bdev_name": "Malloc1" 00:18:37.039 } 
00:18:37.039 ]' 00:18:37.039 18:46:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:37.039 18:46:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:18:37.039 { 00:18:37.039 "nbd_device": "/dev/nbd0", 00:18:37.039 "bdev_name": "Malloc0" 00:18:37.039 }, 00:18:37.039 { 00:18:37.039 "nbd_device": "/dev/nbd1", 00:18:37.039 "bdev_name": "Malloc1" 00:18:37.039 } 00:18:37.039 ]' 00:18:37.039 18:46:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:18:37.039 /dev/nbd1' 00:18:37.039 18:46:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:18:37.039 /dev/nbd1' 00:18:37.039 18:46:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:37.039 18:46:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:18:37.039 18:46:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:18:37.039 18:46:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:18:37.039 18:46:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:18:37.039 18:46:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:18:37.039 18:46:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:37.039 18:46:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:37.039 18:46:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:18:37.039 18:46:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:18:37.039 18:46:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:18:37.039 18:46:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:18:37.039 256+0 records in 00:18:37.039 256+0 records out 00:18:37.039 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00885382 s, 118 MB/s 00:18:37.039 18:46:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:37.039 18:46:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:18:37.039 256+0 records in 00:18:37.039 256+0 records out 00:18:37.039 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0308636 s, 34.0 MB/s 00:18:37.039 18:46:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:37.039 18:46:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:18:37.039 256+0 records in 00:18:37.039 256+0 records out 00:18:37.039 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0409205 s, 25.6 MB/s 00:18:37.039 18:46:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:18:37.039 18:46:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:37.039 18:46:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:37.039 18:46:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:18:37.039 18:46:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:18:37.039 18:46:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:18:37.039 18:46:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:18:37.039 18:46:05 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:37.039 18:46:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:18:37.039 18:46:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:37.039 18:46:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:18:37.039 18:46:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:18:37.039 18:46:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:18:37.039 18:46:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:37.039 18:46:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:37.039 18:46:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:37.039 18:46:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:18:37.039 18:46:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:37.039 18:46:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:37.298 18:46:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:37.298 18:46:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:37.298 18:46:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:37.298 18:46:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:37.298 18:46:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:37.298 18:46:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:37.298 18:46:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:18:37.298 18:46:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:18:37.298 18:46:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:37.298 18:46:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:18:37.558 18:46:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:37.558 18:46:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:37.558 18:46:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:37.558 18:46:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:37.558 18:46:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:37.558 18:46:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:37.558 18:46:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:18:37.558 18:46:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:18:37.558 18:46:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:37.558 18:46:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:37.558 18:46:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:37.816 18:46:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:37.816 18:46:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:37.816 18:46:06 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:18:38.076 18:46:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:38.076 18:46:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:38.076 18:46:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:18:38.076 18:46:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:18:38.076 18:46:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:18:38.076 18:46:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:18:38.076 18:46:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:18:38.076 18:46:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:18:38.076 18:46:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:18:38.076 18:46:06 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:18:38.664 18:46:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:18:40.070 [2024-10-08 18:46:08.585922] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:40.070 [2024-10-08 18:46:08.815867] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:40.070 [2024-10-08 18:46:08.815884] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:40.329 [2024-10-08 18:46:09.032784] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:18:40.329 [2024-10-08 18:46:09.032874] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:18:41.703 18:46:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:18:41.703 spdk_app_start Round 2 00:18:41.703 18:46:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:18:41.703 18:46:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60017 /var/tmp/spdk-nbd.sock 00:18:41.703 18:46:10 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60017 ']' 00:18:41.703 18:46:10 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:41.703 18:46:10 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:41.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:18:41.703 18:46:10 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
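
The waitfornbd traces in each round (grep -q -w nbdX /proc/partitions, break, then a direct-I/O dd and a stat) poll for the kernel to register the new device before any data is written. A hedged reconstruction of that first polling stage, under a hypothetical name and with an assumed inter-iteration delay that the xtrace does not show:

  # Poll /proc/partitions until the nbd device shows up, up to 20 tries.
  waitfornbd_sketch() {
      local nbd_name=$1
      local i
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && return 0
          sleep 0.1   # assumption: the trace shows only the loop bounds, not the delay
      done
      return 1        # device never appeared
  }

The traced helper then reads one 4096-byte block back with dd iflag=direct and stats its size, confirming the device actually services I/O before returning success.
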
00:18:41.703 18:46:10 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:41.703 18:46:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:18:41.703 18:46:10 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:41.703 18:46:10 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:18:41.703 18:46:10 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:18:42.269 Malloc0 00:18:42.269 18:46:10 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:18:42.529 Malloc1 00:18:42.529 18:46:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:18:42.529 18:46:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:42.529 18:46:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:18:42.529 18:46:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:18:42.529 18:46:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:42.529 18:46:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:18:42.529 18:46:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:18:42.529 18:46:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:42.529 18:46:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:18:42.529 18:46:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:42.529 18:46:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:42.529 18:46:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:42.529 18:46:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:18:42.529 18:46:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:42.529 18:46:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:42.529 18:46:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:18:42.788 /dev/nbd0 00:18:42.788 18:46:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:42.788 18:46:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:42.788 18:46:11 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:42.788 18:46:11 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:18:42.788 18:46:11 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:42.788 18:46:11 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:42.788 18:46:11 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:42.788 18:46:11 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:18:42.788 18:46:11 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:42.788 18:46:11 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:42.788 18:46:11 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:18:42.788 1+0 records in 00:18:42.788 1+0 records out 
00:18:42.788 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000516587 s, 7.9 MB/s 00:18:42.788 18:46:11 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:18:42.788 18:46:11 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:18:42.788 18:46:11 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:18:42.788 18:46:11 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:42.788 18:46:11 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:18:42.788 18:46:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:42.788 18:46:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:42.788 18:46:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:18:43.047 /dev/nbd1 00:18:43.047 18:46:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:43.047 18:46:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:43.047 18:46:11 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:18:43.047 18:46:11 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:18:43.047 18:46:11 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:43.047 18:46:11 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:43.047 18:46:11 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:18:43.047 18:46:11 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:18:43.047 18:46:11 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:43.047 18:46:11 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:43.047 18:46:11 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:18:43.047 1+0 records in 00:18:43.047 1+0 records out 00:18:43.047 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381483 s, 10.7 MB/s 00:18:43.047 18:46:11 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:18:43.047 18:46:11 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:18:43.047 18:46:11 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:18:43.047 18:46:11 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:43.047 18:46:11 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:18:43.047 18:46:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:43.047 18:46:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:43.047 18:46:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:43.047 18:46:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:43.047 18:46:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:43.305 18:46:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:18:43.305 { 00:18:43.305 "nbd_device": "/dev/nbd0", 00:18:43.305 "bdev_name": "Malloc0" 00:18:43.305 }, 00:18:43.305 { 00:18:43.305 "nbd_device": "/dev/nbd1", 00:18:43.305 "bdev_name": "Malloc1" 00:18:43.305 } 
00:18:43.305 ]' 00:18:43.305 18:46:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:43.305 18:46:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:18:43.305 { 00:18:43.305 "nbd_device": "/dev/nbd0", 00:18:43.305 "bdev_name": "Malloc0" 00:18:43.305 }, 00:18:43.305 { 00:18:43.305 "nbd_device": "/dev/nbd1", 00:18:43.305 "bdev_name": "Malloc1" 00:18:43.305 } 00:18:43.305 ]' 00:18:43.305 18:46:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:18:43.305 /dev/nbd1' 00:18:43.305 18:46:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:18:43.305 /dev/nbd1' 00:18:43.305 18:46:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:43.305 18:46:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:18:43.305 18:46:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:18:43.305 18:46:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:18:43.305 18:46:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:18:43.305 18:46:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:18:43.305 18:46:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:43.305 18:46:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:43.305 18:46:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:18:43.305 18:46:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:18:43.306 18:46:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:18:43.306 18:46:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:18:43.306 256+0 records in 00:18:43.306 256+0 records out 00:18:43.306 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00875466 s, 120 MB/s 00:18:43.565 18:46:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:43.565 18:46:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:18:43.565 256+0 records in 00:18:43.565 256+0 records out 00:18:43.565 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0370943 s, 28.3 MB/s 00:18:43.565 18:46:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:43.565 18:46:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:18:43.565 256+0 records in 00:18:43.565 256+0 records out 00:18:43.565 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0328203 s, 31.9 MB/s 00:18:43.565 18:46:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:18:43.565 18:46:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:43.565 18:46:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:43.565 18:46:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:18:43.565 18:46:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:18:43.565 18:46:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:18:43.565 18:46:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:18:43.565 18:46:12 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:43.565 18:46:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:18:43.565 18:46:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:43.565 18:46:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:18:43.565 18:46:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:18:43.565 18:46:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:18:43.565 18:46:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:43.565 18:46:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:43.565 18:46:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:43.565 18:46:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:18:43.565 18:46:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:43.565 18:46:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:43.823 18:46:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:43.823 18:46:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:43.824 18:46:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:43.824 18:46:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:43.824 18:46:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:43.824 18:46:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:43.824 18:46:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:18:43.824 18:46:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:18:43.824 18:46:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:43.824 18:46:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:18:44.082 18:46:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:44.082 18:46:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:44.082 18:46:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:44.082 18:46:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:44.082 18:46:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:44.082 18:46:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:44.082 18:46:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:18:44.082 18:46:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:18:44.082 18:46:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:44.082 18:46:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:44.082 18:46:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:44.649 18:46:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:44.649 18:46:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:44.649 18:46:13 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:18:44.649 18:46:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:44.649 18:46:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:18:44.649 18:46:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:44.649 18:46:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:18:44.649 18:46:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:18:44.649 18:46:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:18:44.649 18:46:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:18:44.649 18:46:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:18:44.649 18:46:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:18:44.649 18:46:13 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:18:45.216 18:46:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:18:46.591 [2024-10-08 18:46:15.239238] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:46.850 [2024-10-08 18:46:15.481586] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:46.850 [2024-10-08 18:46:15.481594] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:47.108 [2024-10-08 18:46:15.707027] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:18:47.108 [2024-10-08 18:46:15.707126] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:18:48.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:18:48.045 18:46:16 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60017 /var/tmp/spdk-nbd.sock 00:18:48.045 18:46:16 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60017 ']' 00:18:48.045 18:46:16 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:48.045 18:46:16 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:48.045 18:46:16 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
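
After every round the harness asks the target which nbd devices are still exported and counts them; the nbd_get_count traces boil down to one RPC plus jq and grep. A short sketch using the same socket path, jq filter, and grep invocation the log shows (the '|| true' mirrors the bare 'true' in the trace, which keeps a zero-match grep from aborting an errexit shell):

  disks_json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
  nbd_names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')    # one path per line, e.g. /dev/nbd0
  count=$(echo "$nbd_names" | grep -c /dev/nbd || true)          # 2 while running, 0 after nbd_stop_disks
  echo "exported nbd devices: $count"
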
00:18:48.045 18:46:16 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:48.045 18:46:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:18:48.304 18:46:17 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:48.304 18:46:17 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:18:48.304 18:46:17 event.app_repeat -- event/event.sh@39 -- # killprocess 60017 00:18:48.304 18:46:17 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 60017 ']' 00:18:48.304 18:46:17 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 60017 00:18:48.304 18:46:17 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:18:48.304 18:46:17 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:48.304 18:46:17 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60017 00:18:48.304 killing process with pid 60017 00:18:48.304 18:46:17 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:48.304 18:46:17 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:48.304 18:46:17 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60017' 00:18:48.304 18:46:17 event.app_repeat -- common/autotest_common.sh@969 -- # kill 60017 00:18:48.304 18:46:17 event.app_repeat -- common/autotest_common.sh@974 -- # wait 60017 00:18:49.747 spdk_app_start is called in Round 0. 00:18:49.747 Shutdown signal received, stop current app iteration 00:18:49.747 Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 reinitialization... 00:18:49.747 spdk_app_start is called in Round 1. 00:18:49.747 Shutdown signal received, stop current app iteration 00:18:49.748 Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 reinitialization... 00:18:49.748 spdk_app_start is called in Round 2. 00:18:49.748 Shutdown signal received, stop current app iteration 00:18:49.748 Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 reinitialization... 00:18:49.748 spdk_app_start is called in Round 3. 00:18:49.748 Shutdown signal received, stop current app iteration 00:18:49.748 18:46:18 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:18:49.748 18:46:18 event.app_repeat -- event/event.sh@42 -- # return 0 00:18:49.748 00:18:49.748 real 0m21.870s 00:18:49.748 user 0m46.229s 00:18:49.748 sys 0m3.521s 00:18:49.748 18:46:18 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:49.748 18:46:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:18:49.748 ************************************ 00:18:49.748 END TEST app_repeat 00:18:49.748 ************************************ 00:18:49.748 18:46:18 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:18:49.748 18:46:18 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:18:49.748 18:46:18 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:49.748 18:46:18 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:49.748 18:46:18 event -- common/autotest_common.sh@10 -- # set +x 00:18:49.748 ************************************ 00:18:49.748 START TEST cpu_locks 00:18:49.748 ************************************ 00:18:49.748 18:46:18 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:18:50.010 * Looking for test storage... 
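Every test in this group tears its target down through autotest_common.sh's killprocess, first traced above for pid 60017. A sketch matching the @950-@974 lines; the exact control flow between the checks is an assumption, and the non-Linux and sudo branches never fire in this log, so they are elided:

    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1     # @950: refuse an empty pid
        kill -0 "$pid" || return 1    # @954: bail if it already exited
        if [ "$(uname)" = Linux ]; then
            # @956: resolve the command name (reactor_0 for spdk_tgt)
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        # @960 special-cases process_name = sudo; never taken here
        echo "killing process with pid $pid"
        kill "$pid"     # @969: default SIGTERM
        wait "$pid"     # @974: reap it so the socket is really free
    }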
00:18:50.010 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:18:50.010 18:46:18 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:50.011 18:46:18 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:18:50.011 18:46:18 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:50.011 18:46:18 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:50.011 18:46:18 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:50.011 18:46:18 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:50.011 18:46:18 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:50.011 18:46:18 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:18:50.011 18:46:18 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:18:50.011 18:46:18 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:18:50.011 18:46:18 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:18:50.011 18:46:18 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:18:50.011 18:46:18 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:18:50.011 18:46:18 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:18:50.011 18:46:18 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:50.011 18:46:18 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:18:50.011 18:46:18 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:18:50.011 18:46:18 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:50.011 18:46:18 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:50.011 18:46:18 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:18:50.011 18:46:18 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:18:50.011 18:46:18 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:50.011 18:46:18 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:18:50.011 18:46:18 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:18:50.011 18:46:18 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:18:50.011 18:46:18 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:18:50.011 18:46:18 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:50.011 18:46:18 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:18:50.011 18:46:18 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:18:50.011 18:46:18 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:50.011 18:46:18 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:50.011 18:46:18 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:18:50.011 18:46:18 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:50.011 18:46:18 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:50.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.011 --rc genhtml_branch_coverage=1 00:18:50.011 --rc genhtml_function_coverage=1 00:18:50.011 --rc genhtml_legend=1 00:18:50.011 --rc geninfo_all_blocks=1 00:18:50.011 --rc geninfo_unexecuted_blocks=1 00:18:50.011 00:18:50.011 ' 00:18:50.011 18:46:18 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:50.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.011 --rc genhtml_branch_coverage=1 00:18:50.011 --rc genhtml_function_coverage=1 
00:18:50.011 --rc genhtml_legend=1 00:18:50.011 --rc geninfo_all_blocks=1 00:18:50.011 --rc geninfo_unexecuted_blocks=1 00:18:50.011 00:18:50.011 ' 00:18:50.011 18:46:18 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:50.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.011 --rc genhtml_branch_coverage=1 00:18:50.011 --rc genhtml_function_coverage=1 00:18:50.011 --rc genhtml_legend=1 00:18:50.011 --rc geninfo_all_blocks=1 00:18:50.011 --rc geninfo_unexecuted_blocks=1 00:18:50.011 00:18:50.011 ' 00:18:50.011 18:46:18 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:50.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:50.011 --rc genhtml_branch_coverage=1 00:18:50.011 --rc genhtml_function_coverage=1 00:18:50.011 --rc genhtml_legend=1 00:18:50.011 --rc geninfo_all_blocks=1 00:18:50.011 --rc geninfo_unexecuted_blocks=1 00:18:50.011 00:18:50.011 ' 00:18:50.011 18:46:18 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:18:50.011 18:46:18 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:18:50.011 18:46:18 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:18:50.011 18:46:18 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:18:50.011 18:46:18 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:50.011 18:46:18 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:50.011 18:46:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:18:50.011 ************************************ 00:18:50.011 START TEST default_locks 00:18:50.011 ************************************ 00:18:50.011 18:46:18 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:18:50.011 18:46:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60491 00:18:50.011 18:46:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60491 00:18:50.011 18:46:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:18:50.011 18:46:18 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 60491 ']' 00:18:50.011 18:46:18 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:50.011 18:46:18 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:50.011 18:46:18 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:50.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:50.011 18:46:18 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:50.011 18:46:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:18:50.270 [2024-10-08 18:46:18.801529] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
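The lcov probe a few entries back (scripts/common.sh, `lt 1.15 2`) decides whether the branch-coverage flags above get exported. A condensed sketch of that comparison: split both version strings on ".", "-" and ":" and compare numerically field by field. The real helper routes every field through its decimal() normalizer (@353-@355 in the trace); here that collapses to a ${field:-0} default, which is an approximation:

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local IFS=.-: # split on any of . - :
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        local op=$2
        read -ra ver2 <<< "$3"
        local v len=$((${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}))
        for ((v = 0; v < len; v++)); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0} # pad the shorter one
            ((a > b)) && { [[ $op == '>' || $op == '>=' ]]; return; }
            ((a < b)) && { [[ $op == '<' || $op == '<=' ]]; return; }
        done
        [[ $op == *'='* ]] # all fields equal: ==, >= and <= succeed
    }

    lt 1.15 2 && echo "lcov older than 2.x" # the case traced above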
00:18:50.270 [2024-10-08 18:46:18.801969] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60491 ] 00:18:50.270 [2024-10-08 18:46:18.989701] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.529 [2024-10-08 18:46:19.218999] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:51.466 18:46:20 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:51.466 18:46:20 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:18:51.466 18:46:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60491 00:18:51.466 18:46:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60491 00:18:51.466 18:46:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:18:52.034 18:46:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60491 00:18:52.034 18:46:20 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 60491 ']' 00:18:52.034 18:46:20 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 60491 00:18:52.034 18:46:20 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:18:52.034 18:46:20 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:52.292 18:46:20 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60491 00:18:52.292 18:46:20 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:52.292 killing process with pid 60491 00:18:52.292 18:46:20 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:52.292 18:46:20 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60491' 00:18:52.292 18:46:20 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 60491 00:18:52.292 18:46:20 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 60491 00:18:54.899 18:46:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60491 00:18:54.899 18:46:23 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:18:54.899 18:46:23 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60491 00:18:54.900 18:46:23 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:18:54.900 18:46:23 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:54.900 18:46:23 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:18:55.158 18:46:23 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:55.158 18:46:23 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 60491 00:18:55.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
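The trace has just entered autotest_common.sh's NOT wrapper (@650-@653 above): default_locks asserts that a second waitforlisten on the killed target must fail, so the ERROR banner that follows is the expected outcome, not a problem. A sketch of the wrapper; the valid_exec_arg vetting, the es > 128 signal handling at @661 and the allow-list consulted at @672 are elided, since none of them trigger in this log:

    # Run a command that is expected to fail; succeed only if it did.
    NOT() {
        local es=0
        "$@" || es=$?
        ((!es == 0)) # @677: a non-zero es makes NOT itself return 0
    }

    NOT waitforlisten "$spdk_tgt_pid" # passes once the target is gone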
00:18:55.158 ERROR: process (pid: 60491) is no longer running 00:18:55.158 18:46:23 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 60491 ']' 00:18:55.158 18:46:23 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.158 18:46:23 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:55.158 18:46:23 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:55.158 18:46:23 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:55.158 18:46:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:18:55.158 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (60491) - No such process 00:18:55.158 18:46:23 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:55.158 18:46:23 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:18:55.158 18:46:23 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:18:55.158 18:46:23 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:55.158 18:46:23 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:55.158 18:46:23 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:55.158 18:46:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:18:55.158 18:46:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:18:55.158 18:46:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:18:55.158 18:46:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:18:55.158 00:18:55.158 real 0m5.004s 00:18:55.158 user 0m4.961s 00:18:55.158 sys 0m0.858s 00:18:55.158 18:46:23 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:55.158 18:46:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:18:55.158 ************************************ 00:18:55.158 END TEST default_locks 00:18:55.158 ************************************ 00:18:55.158 18:46:23 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:18:55.158 18:46:23 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:55.158 18:46:23 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:55.158 18:46:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:18:55.158 ************************************ 00:18:55.158 START TEST default_locks_via_rpc 00:18:55.158 ************************************ 00:18:55.158 18:46:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:18:55.158 18:46:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60572 00:18:55.158 18:46:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:18:55.158 18:46:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60572 00:18:55.158 18:46:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60572 ']' 00:18:55.158 18:46:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 
-- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.158 18:46:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:55.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:55.158 18:46:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:55.158 18:46:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:55.158 18:46:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:55.158 [2024-10-08 18:46:23.864833] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:18:55.158 [2024-10-08 18:46:23.865215] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60572 ] 00:18:55.416 [2024-10-08 18:46:24.049097] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.674 [2024-10-08 18:46:24.281267] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.611 18:46:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:56.611 18:46:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:18:56.611 18:46:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:18:56.611 18:46:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.611 18:46:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:56.611 18:46:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.611 18:46:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:18:56.611 18:46:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:18:56.611 18:46:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:18:56.611 18:46:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:18:56.611 18:46:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:18:56.611 18:46:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.611 18:46:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:56.611 18:46:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.611 18:46:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60572 00:18:56.611 18:46:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60572 00:18:56.611 18:46:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:18:57.178 18:46:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60572 00:18:57.178 18:46:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 60572 ']' 00:18:57.178 18:46:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 60572 00:18:57.178 18:46:25 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:18:57.178 18:46:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:57.178 18:46:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60572 00:18:57.178 killing process with pid 60572 00:18:57.178 18:46:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:57.178 18:46:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:57.178 18:46:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60572' 00:18:57.178 18:46:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 60572 00:18:57.178 18:46:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 60572 00:19:00.496 ************************************ 00:19:00.496 END TEST default_locks_via_rpc 00:19:00.496 ************************************ 00:19:00.496 00:19:00.496 real 0m4.954s 00:19:00.496 user 0m5.109s 00:19:00.496 sys 0m0.842s 00:19:00.496 18:46:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:00.496 18:46:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:00.496 18:46:28 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:19:00.497 18:46:28 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:00.497 18:46:28 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:00.497 18:46:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:19:00.497 ************************************ 00:19:00.497 START TEST non_locking_app_on_locked_coremask 00:19:00.497 ************************************ 00:19:00.497 18:46:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:19:00.497 18:46:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60657 00:19:00.497 18:46:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:19:00.497 18:46:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60657 /var/tmp/spdk.sock 00:19:00.497 18:46:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60657 ']' 00:19:00.497 18:46:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:00.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:00.497 18:46:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:00.497 18:46:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
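Both teardowns so far gate on cpu_locks.sh's locks_exist (@22, traced for pids 60491 and 60572): with cpumask locks active the target flocks one file per claimed core, and lslocks attributes those locks to the pid. A sketch; the /var/tmp/spdk_cpu_lock_000 naming is taken from the check_remaining_locks expansion later in this log:

    # True while the pid still holds at least one core lock file,
    # e.g. /var/tmp/spdk_cpu_lock_000 for core 0.
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    locks_exist "$spdk_tgt_pid" && echo "core locks still held"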
00:19:00.497 18:46:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:00.497 18:46:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:19:00.497 [2024-10-08 18:46:28.886345] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:19:00.497 [2024-10-08 18:46:28.886532] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60657 ] 00:19:00.497 [2024-10-08 18:46:29.074114] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.755 [2024-10-08 18:46:29.309709] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:01.689 18:46:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:01.689 18:46:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:19:01.690 18:46:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:19:01.690 18:46:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60684 00:19:01.690 18:46:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60684 /var/tmp/spdk2.sock 00:19:01.690 18:46:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60684 ']' 00:19:01.690 18:46:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:19:01.690 18:46:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:01.690 18:46:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:19:01.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:19:01.690 18:46:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:01.690 18:46:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:19:01.949 [2024-10-08 18:46:30.485439] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:19:01.949 [2024-10-08 18:46:30.486045] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60684 ] 00:19:01.949 [2024-10-08 18:46:30.695347] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:19:01.949 [2024-10-08 18:46:30.695449] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.515 [2024-10-08 18:46:31.159145] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.416 18:46:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:04.416 18:46:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:19:04.416 18:46:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60657 00:19:04.416 18:46:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60657 00:19:04.416 18:46:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:19:05.792 18:46:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60657 00:19:05.792 18:46:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60657 ']' 00:19:05.792 18:46:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 60657 00:19:05.792 18:46:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:19:05.792 18:46:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:05.792 18:46:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60657 00:19:05.792 18:46:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:05.792 18:46:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:05.792 killing process with pid 60657 00:19:05.792 18:46:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60657' 00:19:05.792 18:46:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 60657 00:19:05.792 18:46:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 60657 00:19:12.354 18:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60684 00:19:12.355 18:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60684 ']' 00:19:12.355 18:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 60684 00:19:12.355 18:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:19:12.355 18:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:12.355 18:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60684 00:19:12.355 killing process with pid 60684 00:19:12.355 18:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:12.355 18:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:12.355 18:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60684' 00:19:12.355 18:46:39 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 60684 00:19:12.355 18:46:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 60684 00:19:14.256 ************************************ 00:19:14.256 END TEST non_locking_app_on_locked_coremask 00:19:14.256 ************************************ 00:19:14.256 00:19:14.256 real 0m13.997s 00:19:14.256 user 0m14.513s 00:19:14.256 sys 0m1.802s 00:19:14.256 18:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:14.256 18:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:19:14.256 18:46:42 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:19:14.256 18:46:42 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:14.256 18:46:42 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:14.256 18:46:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:19:14.256 ************************************ 00:19:14.256 START TEST locking_app_on_unlocked_coremask 00:19:14.256 ************************************ 00:19:14.256 18:46:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:19:14.256 18:46:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60853 00:19:14.256 18:46:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60853 /var/tmp/spdk.sock 00:19:14.256 18:46:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60853 ']' 00:19:14.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:14.256 18:46:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.256 18:46:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:14.256 18:46:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:14.256 18:46:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:19:14.256 18:46:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:14.256 18:46:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:19:14.256 [2024-10-08 18:46:42.947504] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:19:14.256 [2024-10-08 18:46:42.947921] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60853 ] 00:19:14.514 [2024-10-08 18:46:43.134859] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:19:14.514 [2024-10-08 18:46:43.135168] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.772 [2024-10-08 18:46:43.355688] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.706 18:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:15.706 18:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:19:15.706 18:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60870 00:19:15.706 18:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60870 /var/tmp/spdk2.sock 00:19:15.706 18:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:19:15.706 18:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60870 ']' 00:19:15.706 18:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:19:15.706 18:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:15.706 18:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:19:15.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:19:15.706 18:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:15.706 18:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:19:15.706 [2024-10-08 18:46:44.442826] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
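The two runs above exercise the same two-instance pattern: both targets ask for core 0 (-m 0x1), and they can coexist only while at most one of them claims the core lock, either by starting with --disable-cpumask-locks or by dropping the locks over RPC. A sketch of the launch sequence with paths and flags as traced; the pid variables are illustrative:

    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &  # takes no lock
    tgt1=$!
    waitforlisten "$tgt1"                    # default /var/tmp/spdk.sock

    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &   # claims core 0
    tgt2=$!
    waitforlisten "$tgt2" /var/tmp/spdk2.sock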
00:19:15.706 [2024-10-08 18:46:44.443316] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60870 ] 00:19:15.964 [2024-10-08 18:46:44.636066] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.532 [2024-10-08 18:46:45.105768] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.063 18:46:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:19.063 18:46:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:19:19.063 18:46:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60870 00:19:19.063 18:46:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:19:19.063 18:46:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60870 00:19:19.629 18:46:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60853 00:19:19.629 18:46:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60853 ']' 00:19:19.629 18:46:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 60853 00:19:19.629 18:46:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:19:19.629 18:46:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:19.629 18:46:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60853 00:19:19.887 killing process with pid 60853 00:19:19.887 18:46:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:19.887 18:46:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:19.887 18:46:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60853' 00:19:19.887 18:46:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 60853 00:19:19.887 18:46:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 60853 00:19:26.473 18:46:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60870 00:19:26.473 18:46:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60870 ']' 00:19:26.473 18:46:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 60870 00:19:26.473 18:46:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:19:26.473 18:46:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:26.473 18:46:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60870 00:19:26.473 killing process with pid 60870 00:19:26.473 18:46:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:26.473 18:46:54 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:26.473 18:46:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60870' 00:19:26.473 18:46:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 60870 00:19:26.473 18:46:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 60870 00:19:28.376 ************************************ 00:19:28.376 END TEST locking_app_on_unlocked_coremask 00:19:28.376 ************************************ 00:19:28.376 00:19:28.376 real 0m14.023s 00:19:28.376 user 0m14.626s 00:19:28.376 sys 0m1.685s 00:19:28.376 18:46:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:28.376 18:46:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:19:28.376 18:46:56 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:19:28.376 18:46:56 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:28.376 18:46:56 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:28.376 18:46:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:19:28.376 ************************************ 00:19:28.376 START TEST locking_app_on_locked_coremask 00:19:28.376 ************************************ 00:19:28.376 18:46:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:19:28.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.376 18:46:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=61041 00:19:28.376 18:46:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 61041 /var/tmp/spdk.sock 00:19:28.376 18:46:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:19:28.376 18:46:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 61041 ']' 00:19:28.376 18:46:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.376 18:46:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:28.376 18:46:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.376 18:46:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:28.376 18:46:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:19:28.376 [2024-10-08 18:46:57.036977] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:19:28.376 [2024-10-08 18:46:57.037156] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61041 ] 00:19:28.634 [2024-10-08 18:46:57.220804] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.893 [2024-10-08 18:46:57.462051] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:29.829 18:46:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:29.829 18:46:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:19:29.829 18:46:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=61063 00:19:29.829 18:46:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 61063 /var/tmp/spdk2.sock 00:19:29.829 18:46:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:19:29.829 18:46:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:19:29.829 18:46:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 61063 /var/tmp/spdk2.sock 00:19:29.829 18:46:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:19:29.829 18:46:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:29.829 18:46:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:19:29.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:19:29.829 18:46:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:29.829 18:46:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 61063 /var/tmp/spdk2.sock 00:19:29.829 18:46:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 61063 ']' 00:19:29.829 18:46:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:19:29.829 18:46:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:29.829 18:46:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:19:29.829 18:46:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:29.829 18:46:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:19:29.829 [2024-10-08 18:46:58.539949] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
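The two ERROR lines above are the point of locking_app_on_locked_coremask rather than a failure: pid 61041 holds the core-0 lock, so the second target (61063, same -m 0x1, on /var/tmp/spdk2.sock) dies inside claim_cpu_cores during startup. The test encodes that expectation through the NOT wrapper, roughly:

    # second instance must fail to come up while 61041 holds core 0
    NOT waitforlisten "$tgt2" /var/tmp/spdk2.sock   # expected failure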
00:19:29.829 [2024-10-08 18:46:58.540105] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61063 ] 00:19:30.089 [2024-10-08 18:46:58.717902] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 61041 has claimed it. 00:19:30.089 [2024-10-08 18:46:58.717995] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:19:30.694 ERROR: process (pid: 61063) is no longer running 00:19:30.694 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (61063) - No such process 00:19:30.694 18:46:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:30.694 18:46:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:19:30.694 18:46:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:19:30.694 18:46:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:30.694 18:46:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:30.694 18:46:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:30.694 18:46:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 61041 00:19:30.694 18:46:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61041 00:19:30.694 18:46:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:19:31.260 18:46:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 61041 00:19:31.260 18:46:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 61041 ']' 00:19:31.260 18:46:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 61041 00:19:31.260 18:46:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:19:31.260 18:46:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:31.260 18:46:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61041 00:19:31.260 killing process with pid 61041 00:19:31.260 18:46:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:31.260 18:46:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:31.260 18:46:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61041' 00:19:31.260 18:46:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 61041 00:19:31.260 18:46:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 61041 00:19:34.637 00:19:34.637 real 0m5.877s 00:19:34.637 user 0m6.170s 00:19:34.637 sys 0m1.005s 00:19:34.638 18:47:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:34.638 18:47:02 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:19:34.638 ************************************ 00:19:34.638 END TEST locking_app_on_locked_coremask 00:19:34.638 ************************************ 00:19:34.638 18:47:02 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:19:34.638 18:47:02 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:34.638 18:47:02 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:34.638 18:47:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:19:34.638 ************************************ 00:19:34.638 START TEST locking_overlapped_coremask 00:19:34.638 ************************************ 00:19:34.638 18:47:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:19:34.638 18:47:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:19:34.638 18:47:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=61138 00:19:34.638 18:47:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 61138 /var/tmp/spdk.sock 00:19:34.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:34.638 18:47:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 61138 ']' 00:19:34.638 18:47:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.638 18:47:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:34.638 18:47:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:34.638 18:47:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:34.638 18:47:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:19:34.638 [2024-10-08 18:47:02.983374] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:19:34.638 [2024-10-08 18:47:02.983828] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61138 ] 00:19:34.638 [2024-10-08 18:47:03.180223] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:34.897 [2024-10-08 18:47:03.459462] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:34.897 [2024-10-08 18:47:03.459536] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:34.897 [2024-10-08 18:47:03.459525] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.834 18:47:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:35.834 18:47:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:19:35.834 18:47:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=61167 00:19:35.834 18:47:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:19:35.834 18:47:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 61167 /var/tmp/spdk2.sock 00:19:35.834 18:47:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:19:35.834 18:47:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 61167 /var/tmp/spdk2.sock 00:19:35.834 18:47:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:19:35.834 18:47:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:35.834 18:47:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:19:35.834 18:47:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:35.834 18:47:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 61167 /var/tmp/spdk2.sock 00:19:35.834 18:47:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 61167 ']' 00:19:35.834 18:47:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:19:35.834 18:47:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:35.834 18:47:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:19:35.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:19:35.834 18:47:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:35.834 18:47:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:19:36.127 [2024-10-08 18:47:04.618010] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
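The masks in this test are picked to collide on exactly one core: 0x7 (binary 111) pins reactors to cores 0-2, while the second instance's 0x1c (binary 11100) asks for cores 2-4, so the only contested lock is core 2's. A quick arithmetic check:

    printf 'shared mask: 0x%x\n' $((0x7 & 0x1c))   # -> 0x4, core 2 only

And indeed the claim_cpu_cores error that follows names core 2.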
00:19:36.127 [2024-10-08 18:47:04.618444] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61167 ] 00:19:36.127 [2024-10-08 18:47:04.815531] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61138 has claimed it. 00:19:36.127 [2024-10-08 18:47:04.815809] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:19:36.720 ERROR: process (pid: 61167) is no longer running 00:19:36.720 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (61167) - No such process 00:19:36.720 18:47:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:36.720 18:47:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:19:36.720 18:47:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:19:36.720 18:47:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:36.720 18:47:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:36.720 18:47:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:36.720 18:47:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:19:36.720 18:47:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:19:36.720 18:47:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:19:36.720 18:47:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:19:36.721 18:47:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 61138 00:19:36.721 18:47:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 61138 ']' 00:19:36.721 18:47:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 61138 00:19:36.721 18:47:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:19:36.721 18:47:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:36.721 18:47:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61138 00:19:36.721 18:47:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:36.721 18:47:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:36.721 18:47:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61138' 00:19:36.721 killing process with pid 61138 00:19:36.721 18:47:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 61138 00:19:36.721 18:47:05 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 61138 00:19:40.007 ************************************ 00:19:40.008 END TEST locking_overlapped_coremask 00:19:40.008 ************************************ 00:19:40.008 00:19:40.008 real 0m5.453s 00:19:40.008 user 0m14.292s 00:19:40.008 sys 0m0.791s 00:19:40.008 18:47:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:40.008 18:47:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:19:40.008 18:47:08 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:19:40.008 18:47:08 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:40.008 18:47:08 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:40.008 18:47:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:19:40.008 ************************************ 00:19:40.008 START TEST locking_overlapped_coremask_via_rpc 00:19:40.008 ************************************ 00:19:40.008 18:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:19:40.008 18:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=61242 00:19:40.008 18:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 61242 /var/tmp/spdk.sock 00:19:40.008 18:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:19:40.008 18:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 61242 ']' 00:19:40.008 18:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.008 18:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:40.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.008 18:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.008 18:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:40.008 18:47:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:40.008 [2024-10-08 18:47:08.448554] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:19:40.008 [2024-10-08 18:47:08.449149] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61242 ] 00:19:40.008 [2024-10-08 18:47:08.625515] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
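Before the new test starts taking locks over RPC, note what the check_remaining_locks helper traced in the previous test boils down to: each locked core leaves a /var/tmp/spdk_cpu_lock_NNN file, and the check is a glob-versus-brace-expansion comparison. Restated as a standalone snippet (same logic as the trace above, not new test code):

# Cores 0-2 are locked by the surviving target, so exactly three lock files must remain.
locks=(/var/tmp/spdk_cpu_lock_*)
locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
[[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo "lock files match" || echo "unexpected lock set"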
00:19:40.008 [2024-10-08 18:47:08.625821] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:40.268 [2024-10-08 18:47:08.929554] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:40.268 [2024-10-08 18:47:08.929637] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.268 [2024-10-08 18:47:08.929649] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:41.205 18:47:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:41.205 18:47:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:19:41.205 18:47:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61260 00:19:41.205 18:47:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:19:41.205 18:47:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61260 /var/tmp/spdk2.sock 00:19:41.205 18:47:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 61260 ']' 00:19:41.205 18:47:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:19:41.205 18:47:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:41.467 18:47:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:19:41.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:19:41.467 18:47:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:41.467 18:47:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:41.467 [2024-10-08 18:47:10.102474] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:19:41.467 [2024-10-08 18:47:10.102930] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61260 ] 00:19:41.803 [2024-10-08 18:47:10.309507] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
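Because both targets here pass --disable-cpumask-locks, neither creates /var/tmp/spdk_cpu_lock_* files at startup, so the overlapping masks coexist for now (the second target's reactors on cores 2-4 start below). The shape of the setup, as a sketch:

# Two targets with overlapping masks; no core lock files are taken at launch.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &                          # cores 0-2, pid 61242
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &  # cores 2-4, pid 61260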
00:19:41.803 [2024-10-08 18:47:10.309769] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:42.062 [2024-10-08 18:47:10.808809] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:19:42.062 [2024-10-08 18:47:10.808860] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:42.062 [2024-10-08 18:47:10.808870] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:19:44.594 18:47:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:44.594 18:47:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:19:44.594 18:47:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:19:44.594 18:47:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.594 18:47:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:44.594 18:47:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.594 18:47:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:19:44.594 18:47:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:19:44.594 18:47:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:19:44.594 18:47:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:44.594 18:47:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:44.594 18:47:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:44.594 18:47:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:44.594 18:47:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:19:44.594 18:47:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.594 18:47:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:44.594 [2024-10-08 18:47:12.895234] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61242 has claimed it. 00:19:44.594 request: 00:19:44.594 { 00:19:44.595 "method": "framework_enable_cpumask_locks", 00:19:44.595 "req_id": 1 00:19:44.595 } 00:19:44.595 Got JSON-RPC error response 00:19:44.595 response: 00:19:44.595 { 00:19:44.595 "code": -32603, 00:19:44.595 "message": "Failed to claim CPU core: 2" 00:19:44.595 } 00:19:44.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
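The failure above is the point of the test: framework_enable_cpumask_locks asks an already-running target to take its core locks, and the second target cannot lock core 2 while pid 61242 holds it. The same pair of calls issued by hand would look like this (sketch; rpc.py's -s flag selects the RPC socket, as in the rpc_cmd trace above):

/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks                          # first target: succeeds
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # second: error -32603, "Failed to claim CPU core: 2"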
00:19:44.595 18:47:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:44.595 18:47:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:19:44.595 18:47:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:44.595 18:47:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:44.595 18:47:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:44.595 18:47:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 61242 /var/tmp/spdk.sock 00:19:44.595 18:47:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 61242 ']' 00:19:44.595 18:47:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:44.595 18:47:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:44.595 18:47:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:44.595 18:47:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:44.595 18:47:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:44.595 18:47:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:44.595 18:47:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:19:44.595 18:47:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61260 /var/tmp/spdk2.sock 00:19:44.595 18:47:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 61260 ']' 00:19:44.595 18:47:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:19:44.595 18:47:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:44.595 18:47:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:19:44.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
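The es bookkeeping above is the tail of the NOT wrapper: the harness runs a command that is expected to fail, records its exit status in es, and inverts it (with a special case visible in the trace, (( es > 128 )), that treats signal deaths as real failures). Condensed to its core idea (a sketch, not the full autotest_common.sh helper):

# NOT cmd  ->  succeeds only if cmd fails
NOT() { ! "$@"; }   # sketch; the real helper also tracks $es for signal exits
NOT false && echo "expected failure observed"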
00:19:44.595 18:47:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:44.595 18:47:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:44.881 18:47:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:44.881 18:47:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:19:44.881 18:47:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:19:44.881 18:47:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:19:44.881 ************************************ 00:19:44.881 END TEST locking_overlapped_coremask_via_rpc 00:19:44.881 ************************************ 00:19:44.881 18:47:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:19:44.881 18:47:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:19:44.881 00:19:44.881 real 0m5.117s 00:19:44.881 user 0m1.751s 00:19:44.881 sys 0m0.291s 00:19:44.881 18:47:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:44.881 18:47:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:44.881 18:47:13 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:19:44.881 18:47:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61242 ]] 00:19:44.881 18:47:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61242 00:19:44.881 18:47:13 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 61242 ']' 00:19:44.881 18:47:13 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 61242 00:19:44.881 18:47:13 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:19:44.881 18:47:13 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:44.881 18:47:13 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61242 00:19:44.881 killing process with pid 61242 00:19:44.881 18:47:13 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:44.881 18:47:13 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:44.882 18:47:13 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61242' 00:19:44.882 18:47:13 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 61242 00:19:44.882 18:47:13 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 61242 00:19:48.176 18:47:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61260 ]] 00:19:48.176 18:47:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61260 00:19:48.176 18:47:16 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 61260 ']' 00:19:48.176 18:47:16 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 61260 00:19:48.176 18:47:16 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:19:48.176 18:47:16 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:48.176 
18:47:16 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61260 00:19:48.176 killing process with pid 61260 00:19:48.176 18:47:16 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:48.176 18:47:16 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:48.176 18:47:16 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61260' 00:19:48.176 18:47:16 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 61260 00:19:48.176 18:47:16 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 61260 00:19:50.709 18:47:19 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:19:50.709 18:47:19 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:19:50.709 18:47:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61242 ]] 00:19:50.709 18:47:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61242 00:19:50.709 18:47:19 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 61242 ']' 00:19:50.709 18:47:19 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 61242 00:19:50.709 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (61242) - No such process 00:19:50.709 Process with pid 61242 is not found 00:19:50.709 18:47:19 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 61242 is not found' 00:19:50.709 18:47:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61260 ]] 00:19:50.709 18:47:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61260 00:19:50.709 18:47:19 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 61260 ']' 00:19:50.709 18:47:19 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 61260 00:19:50.709 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (61260) - No such process 00:19:50.709 Process with pid 61260 is not found 00:19:50.709 18:47:19 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 61260 is not found' 00:19:50.709 18:47:19 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:19:50.709 00:19:50.709 real 1m0.872s 00:19:50.709 user 1m42.261s 00:19:50.709 sys 0m8.583s 00:19:50.709 18:47:19 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:50.709 18:47:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:19:50.709 ************************************ 00:19:50.709 END TEST cpu_locks 00:19:50.709 ************************************ 00:19:50.709 00:19:50.710 real 1m37.450s 00:19:50.710 user 2m55.232s 00:19:50.710 sys 0m13.486s 00:19:50.710 18:47:19 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:50.710 ************************************ 00:19:50.710 END TEST event 00:19:50.710 18:47:19 event -- common/autotest_common.sh@10 -- # set +x 00:19:50.710 ************************************ 00:19:50.710 18:47:19 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:19:50.710 18:47:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:50.710 18:47:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:50.710 18:47:19 -- common/autotest_common.sh@10 -- # set +x 00:19:50.710 ************************************ 00:19:50.710 START TEST thread 00:19:50.710 ************************************ 00:19:50.710 18:47:19 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:19:50.968 * Looking for test storage... 
00:19:50.968 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:19:50.968 18:47:19 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:50.968 18:47:19 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:50.968 18:47:19 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:19:50.968 18:47:19 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:50.968 18:47:19 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:50.968 18:47:19 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:50.968 18:47:19 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:50.968 18:47:19 thread -- scripts/common.sh@336 -- # IFS=.-: 00:19:50.968 18:47:19 thread -- scripts/common.sh@336 -- # read -ra ver1 00:19:50.968 18:47:19 thread -- scripts/common.sh@337 -- # IFS=.-: 00:19:50.968 18:47:19 thread -- scripts/common.sh@337 -- # read -ra ver2 00:19:50.968 18:47:19 thread -- scripts/common.sh@338 -- # local 'op=<' 00:19:50.968 18:47:19 thread -- scripts/common.sh@340 -- # ver1_l=2 00:19:50.968 18:47:19 thread -- scripts/common.sh@341 -- # ver2_l=1 00:19:50.968 18:47:19 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:50.968 18:47:19 thread -- scripts/common.sh@344 -- # case "$op" in 00:19:50.968 18:47:19 thread -- scripts/common.sh@345 -- # : 1 00:19:50.968 18:47:19 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:50.968 18:47:19 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:50.968 18:47:19 thread -- scripts/common.sh@365 -- # decimal 1 00:19:50.968 18:47:19 thread -- scripts/common.sh@353 -- # local d=1 00:19:50.968 18:47:19 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:50.968 18:47:19 thread -- scripts/common.sh@355 -- # echo 1 00:19:50.968 18:47:19 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:19:50.968 18:47:19 thread -- scripts/common.sh@366 -- # decimal 2 00:19:50.968 18:47:19 thread -- scripts/common.sh@353 -- # local d=2 00:19:50.968 18:47:19 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:50.968 18:47:19 thread -- scripts/common.sh@355 -- # echo 2 00:19:50.968 18:47:19 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:19:50.968 18:47:19 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:50.968 18:47:19 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:50.968 18:47:19 thread -- scripts/common.sh@368 -- # return 0 00:19:50.968 18:47:19 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:50.968 18:47:19 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:50.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.968 --rc genhtml_branch_coverage=1 00:19:50.968 --rc genhtml_function_coverage=1 00:19:50.968 --rc genhtml_legend=1 00:19:50.968 --rc geninfo_all_blocks=1 00:19:50.968 --rc geninfo_unexecuted_blocks=1 00:19:50.968 00:19:50.968 ' 00:19:50.968 18:47:19 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:50.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.968 --rc genhtml_branch_coverage=1 00:19:50.968 --rc genhtml_function_coverage=1 00:19:50.968 --rc genhtml_legend=1 00:19:50.968 --rc geninfo_all_blocks=1 00:19:50.968 --rc geninfo_unexecuted_blocks=1 00:19:50.968 00:19:50.968 ' 00:19:50.968 18:47:19 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:50.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:19:50.968 --rc genhtml_branch_coverage=1 00:19:50.968 --rc genhtml_function_coverage=1 00:19:50.968 --rc genhtml_legend=1 00:19:50.968 --rc geninfo_all_blocks=1 00:19:50.968 --rc geninfo_unexecuted_blocks=1 00:19:50.968 00:19:50.968 ' 00:19:50.968 18:47:19 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:50.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.968 --rc genhtml_branch_coverage=1 00:19:50.968 --rc genhtml_function_coverage=1 00:19:50.968 --rc genhtml_legend=1 00:19:50.968 --rc geninfo_all_blocks=1 00:19:50.968 --rc geninfo_unexecuted_blocks=1 00:19:50.968 00:19:50.968 ' 00:19:50.968 18:47:19 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:19:50.968 18:47:19 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:19:50.968 18:47:19 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:50.968 18:47:19 thread -- common/autotest_common.sh@10 -- # set +x 00:19:50.968 ************************************ 00:19:50.968 START TEST thread_poller_perf 00:19:50.968 ************************************ 00:19:50.968 18:47:19 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:19:50.968 [2024-10-08 18:47:19.686997] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:19:50.969 [2024-10-08 18:47:19.687339] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61472 ] 00:19:51.227 [2024-10-08 18:47:19.854687] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.484 [2024-10-08 18:47:20.114641] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.484 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:19:52.859 [2024-10-08T18:47:21.616Z] ====================================== 00:19:52.859 [2024-10-08T18:47:21.616Z] busy:2111375576 (cyc) 00:19:52.859 [2024-10-08T18:47:21.616Z] total_run_count: 358000 00:19:52.859 [2024-10-08T18:47:21.616Z] tsc_hz: 2100000000 (cyc) 00:19:52.859 [2024-10-08T18:47:21.616Z] ====================================== 00:19:52.859 [2024-10-08T18:47:21.616Z] poller_cost: 5897 (cyc), 2808 (nsec) 00:19:52.859 00:19:52.859 real 0m1.907s 00:19:52.859 user 0m1.670s 00:19:52.859 sys 0m0.124s 00:19:52.859 18:47:21 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:52.859 18:47:21 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:19:52.859 ************************************ 00:19:52.859 END TEST thread_poller_perf 00:19:52.859 ************************************ 00:19:52.859 18:47:21 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:19:52.859 18:47:21 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:19:52.859 18:47:21 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:52.859 18:47:21 thread -- common/autotest_common.sh@10 -- # set +x 00:19:52.859 ************************************ 00:19:52.859 START TEST thread_poller_perf 00:19:52.859 ************************************ 00:19:52.860 18:47:21 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:19:53.118 [2024-10-08 18:47:21.669724] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:19:53.118 [2024-10-08 18:47:21.669899] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61514 ] 00:19:53.118 [2024-10-08 18:47:21.858087] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.685 Running 1000 pollers for 1 seconds with 0 microseconds period. 
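The poller_cost line in the summary above is just the two printed counters combined: cycles spent in pollers divided by invocation count, then converted to nanoseconds via the TSC rate. Reproduced with the first run's numbers (shell arithmetic sketch):

busy=2111375576          # cycles consumed by pollers
total_run_count=358000   # poller invocations
tsc_hz=2100000000        # TSC ticks per second
cost_cyc=$(( busy / total_run_count ))            # 5897 cyc
cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))   # 2808 nsec
echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"

The second run announced above repeats the measurement with a 0 microsecond period (-l 0), so the pollers fire every reactor iteration rather than on a timer, which is why the per-call cost below drops to 471 cycles.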
00:19:53.685 [2024-10-08 18:47:22.163139] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.060 [2024-10-08T18:47:23.817Z] ====================================== 00:19:55.060 [2024-10-08T18:47:23.817Z] busy:2103678942 (cyc) 00:19:55.060 [2024-10-08T18:47:23.817Z] total_run_count: 4462000 00:19:55.060 [2024-10-08T18:47:23.817Z] tsc_hz: 2100000000 (cyc) 00:19:55.060 [2024-10-08T18:47:23.817Z] ====================================== 00:19:55.060 [2024-10-08T18:47:23.817Z] poller_cost: 471 (cyc), 224 (nsec) 00:19:55.060 00:19:55.060 real 0m1.983s 00:19:55.060 user 0m1.734s 00:19:55.060 sys 0m0.137s 00:19:55.060 18:47:23 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:55.060 ************************************ 00:19:55.060 END TEST thread_poller_perf 00:19:55.060 ************************************ 00:19:55.060 18:47:23 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:19:55.060 18:47:23 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:19:55.060 00:19:55.060 real 0m4.190s 00:19:55.060 user 0m3.538s 00:19:55.060 sys 0m0.431s 00:19:55.060 ************************************ 00:19:55.060 END TEST thread 00:19:55.060 ************************************ 00:19:55.060 18:47:23 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:55.060 18:47:23 thread -- common/autotest_common.sh@10 -- # set +x 00:19:55.060 18:47:23 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:19:55.060 18:47:23 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:19:55.060 18:47:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:55.060 18:47:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:55.060 18:47:23 -- common/autotest_common.sh@10 -- # set +x 00:19:55.060 ************************************ 00:19:55.060 START TEST app_cmdline 00:19:55.060 ************************************ 00:19:55.060 18:47:23 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:19:55.060 * Looking for test storage... 
00:19:55.060 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:19:55.060 18:47:23 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:55.060 18:47:23 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:19:55.060 18:47:23 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:55.319 18:47:23 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:55.320 18:47:23 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:55.320 18:47:23 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:55.320 18:47:23 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:55.320 18:47:23 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:19:55.320 18:47:23 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:19:55.320 18:47:23 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:19:55.320 18:47:23 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:19:55.320 18:47:23 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:19:55.320 18:47:23 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:19:55.320 18:47:23 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:19:55.320 18:47:23 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:55.320 18:47:23 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:19:55.320 18:47:23 app_cmdline -- scripts/common.sh@345 -- # : 1 00:19:55.320 18:47:23 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:55.320 18:47:23 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:55.320 18:47:23 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:19:55.320 18:47:23 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:19:55.320 18:47:23 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:55.320 18:47:23 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:19:55.320 18:47:23 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:19:55.320 18:47:23 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:19:55.320 18:47:23 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:19:55.320 18:47:23 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:55.320 18:47:23 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:19:55.320 18:47:23 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:19:55.320 18:47:23 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:55.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:55.320 18:47:23 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:55.320 18:47:23 app_cmdline -- scripts/common.sh@368 -- # return 0 00:19:55.320 18:47:23 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:55.320 18:47:23 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:55.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:55.320 --rc genhtml_branch_coverage=1 00:19:55.320 --rc genhtml_function_coverage=1 00:19:55.320 --rc genhtml_legend=1 00:19:55.320 --rc geninfo_all_blocks=1 00:19:55.320 --rc geninfo_unexecuted_blocks=1 00:19:55.320 00:19:55.320 ' 00:19:55.320 18:47:23 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:55.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:55.320 --rc genhtml_branch_coverage=1 00:19:55.320 --rc genhtml_function_coverage=1 00:19:55.320 --rc genhtml_legend=1 00:19:55.320 --rc geninfo_all_blocks=1 00:19:55.320 --rc geninfo_unexecuted_blocks=1 00:19:55.320 00:19:55.320 ' 00:19:55.320 18:47:23 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:55.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:55.320 --rc genhtml_branch_coverage=1 00:19:55.320 --rc genhtml_function_coverage=1 00:19:55.320 --rc genhtml_legend=1 00:19:55.320 --rc geninfo_all_blocks=1 00:19:55.320 --rc geninfo_unexecuted_blocks=1 00:19:55.320 00:19:55.320 ' 00:19:55.320 18:47:23 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:55.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:55.320 --rc genhtml_branch_coverage=1 00:19:55.320 --rc genhtml_function_coverage=1 00:19:55.320 --rc genhtml_legend=1 00:19:55.320 --rc geninfo_all_blocks=1 00:19:55.320 --rc geninfo_unexecuted_blocks=1 00:19:55.320 00:19:55.320 ' 00:19:55.320 18:47:23 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:19:55.320 18:47:23 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61603 00:19:55.320 18:47:23 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61603 00:19:55.320 18:47:23 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 61603 ']' 00:19:55.320 18:47:23 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:55.320 18:47:23 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:19:55.320 18:47:23 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:55.320 18:47:23 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:55.320 18:47:23 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:55.320 18:47:23 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:19:55.320 [2024-10-08 18:47:24.056315] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
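For this suite the target is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are callable and everything else must fail with -32601. The checks that follow below amount to (sketch):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc spdk_get_version         # whitelisted: returns the version JSON shown below
$rpc rpc_get_methods          # whitelisted: lists exactly the two allowed methods
$rpc env_dpdk_get_mem_stats   # not whitelisted: error -32601 "Method not found"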
00:19:55.320 [2024-10-08 18:47:24.056709] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61603 ] 00:19:55.578 [2024-10-08 18:47:24.240341] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.836 [2024-10-08 18:47:24.477112] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:57.211 18:47:25 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:57.211 18:47:25 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:19:57.211 18:47:25 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:19:57.211 { 00:19:57.211 "version": "SPDK v25.01-pre git sha1 716daf683", 00:19:57.211 "fields": { 00:19:57.211 "major": 25, 00:19:57.211 "minor": 1, 00:19:57.211 "patch": 0, 00:19:57.211 "suffix": "-pre", 00:19:57.211 "commit": "716daf683" 00:19:57.211 } 00:19:57.211 } 00:19:57.211 18:47:25 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:19:57.211 18:47:25 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:19:57.211 18:47:25 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:19:57.211 18:47:25 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:19:57.211 18:47:25 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:19:57.211 18:47:25 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.211 18:47:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:19:57.211 18:47:25 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:19:57.211 18:47:25 app_cmdline -- app/cmdline.sh@26 -- # sort 00:19:57.211 18:47:25 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.211 18:47:25 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:19:57.211 18:47:25 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:19:57.211 18:47:25 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:19:57.211 18:47:25 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:19:57.211 18:47:25 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:19:57.211 18:47:25 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:57.211 18:47:25 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:57.211 18:47:25 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:57.211 18:47:25 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:57.211 18:47:25 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:57.211 18:47:25 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:57.211 18:47:25 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:57.211 18:47:25 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:57.211 18:47:25 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:19:57.510 request: 00:19:57.510 { 00:19:57.510 "method": "env_dpdk_get_mem_stats", 00:19:57.510 "req_id": 1 00:19:57.510 } 00:19:57.510 Got JSON-RPC error response 00:19:57.510 response: 00:19:57.510 { 00:19:57.510 "code": -32601, 00:19:57.510 "message": "Method not found" 00:19:57.510 } 00:19:57.510 18:47:26 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:19:57.510 18:47:26 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:57.510 18:47:26 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:57.510 18:47:26 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:57.510 18:47:26 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61603 00:19:57.510 18:47:26 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 61603 ']' 00:19:57.510 18:47:26 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 61603 00:19:57.510 18:47:26 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:19:57.510 18:47:26 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:57.510 18:47:26 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61603 00:19:57.510 killing process with pid 61603 00:19:57.510 18:47:26 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:57.510 18:47:26 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:57.510 18:47:26 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61603' 00:19:57.510 18:47:26 app_cmdline -- common/autotest_common.sh@969 -- # kill 61603 00:19:57.510 18:47:26 app_cmdline -- common/autotest_common.sh@974 -- # wait 61603 00:20:00.796 00:20:00.796 real 0m5.306s 00:20:00.796 user 0m5.681s 00:20:00.796 sys 0m0.722s 00:20:00.796 18:47:29 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:00.796 ************************************ 00:20:00.796 END TEST app_cmdline 00:20:00.796 ************************************ 00:20:00.796 18:47:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:20:00.796 18:47:29 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:20:00.796 18:47:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:00.796 18:47:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:00.796 18:47:29 -- common/autotest_common.sh@10 -- # set +x 00:20:00.797 ************************************ 00:20:00.797 START TEST version 00:20:00.797 ************************************ 00:20:00.797 18:47:29 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:20:00.797 * Looking for test storage... 
00:20:00.797 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:20:00.797 18:47:29 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:00.797 18:47:29 version -- common/autotest_common.sh@1681 -- # lcov --version 00:20:00.797 18:47:29 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:00.797 18:47:29 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:00.797 18:47:29 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:00.797 18:47:29 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:00.797 18:47:29 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:00.797 18:47:29 version -- scripts/common.sh@336 -- # IFS=.-: 00:20:00.797 18:47:29 version -- scripts/common.sh@336 -- # read -ra ver1 00:20:00.797 18:47:29 version -- scripts/common.sh@337 -- # IFS=.-: 00:20:00.797 18:47:29 version -- scripts/common.sh@337 -- # read -ra ver2 00:20:00.797 18:47:29 version -- scripts/common.sh@338 -- # local 'op=<' 00:20:00.797 18:47:29 version -- scripts/common.sh@340 -- # ver1_l=2 00:20:00.797 18:47:29 version -- scripts/common.sh@341 -- # ver2_l=1 00:20:00.797 18:47:29 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:00.797 18:47:29 version -- scripts/common.sh@344 -- # case "$op" in 00:20:00.797 18:47:29 version -- scripts/common.sh@345 -- # : 1 00:20:00.797 18:47:29 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:00.797 18:47:29 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:00.797 18:47:29 version -- scripts/common.sh@365 -- # decimal 1 00:20:00.797 18:47:29 version -- scripts/common.sh@353 -- # local d=1 00:20:00.797 18:47:29 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:00.797 18:47:29 version -- scripts/common.sh@355 -- # echo 1 00:20:00.797 18:47:29 version -- scripts/common.sh@365 -- # ver1[v]=1 00:20:00.797 18:47:29 version -- scripts/common.sh@366 -- # decimal 2 00:20:00.797 18:47:29 version -- scripts/common.sh@353 -- # local d=2 00:20:00.797 18:47:29 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:00.797 18:47:29 version -- scripts/common.sh@355 -- # echo 2 00:20:00.797 18:47:29 version -- scripts/common.sh@366 -- # ver2[v]=2 00:20:00.797 18:47:29 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:00.797 18:47:29 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:00.797 18:47:29 version -- scripts/common.sh@368 -- # return 0 00:20:00.797 18:47:29 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:00.797 18:47:29 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:00.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.797 --rc genhtml_branch_coverage=1 00:20:00.797 --rc genhtml_function_coverage=1 00:20:00.797 --rc genhtml_legend=1 00:20:00.797 --rc geninfo_all_blocks=1 00:20:00.797 --rc geninfo_unexecuted_blocks=1 00:20:00.797 00:20:00.797 ' 00:20:00.797 18:47:29 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:00.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.797 --rc genhtml_branch_coverage=1 00:20:00.797 --rc genhtml_function_coverage=1 00:20:00.797 --rc genhtml_legend=1 00:20:00.797 --rc geninfo_all_blocks=1 00:20:00.797 --rc geninfo_unexecuted_blocks=1 00:20:00.797 00:20:00.797 ' 00:20:00.797 18:47:29 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:00.797 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:20:00.797 --rc genhtml_branch_coverage=1 00:20:00.797 --rc genhtml_function_coverage=1 00:20:00.797 --rc genhtml_legend=1 00:20:00.797 --rc geninfo_all_blocks=1 00:20:00.797 --rc geninfo_unexecuted_blocks=1 00:20:00.797 00:20:00.797 ' 00:20:00.797 18:47:29 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:00.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.797 --rc genhtml_branch_coverage=1 00:20:00.797 --rc genhtml_function_coverage=1 00:20:00.797 --rc genhtml_legend=1 00:20:00.797 --rc geninfo_all_blocks=1 00:20:00.797 --rc geninfo_unexecuted_blocks=1 00:20:00.797 00:20:00.797 ' 00:20:00.797 18:47:29 version -- app/version.sh@17 -- # get_header_version major 00:20:00.797 18:47:29 version -- app/version.sh@14 -- # cut -f2 00:20:00.797 18:47:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:20:00.797 18:47:29 version -- app/version.sh@14 -- # tr -d '"' 00:20:00.797 18:47:29 version -- app/version.sh@17 -- # major=25 00:20:00.797 18:47:29 version -- app/version.sh@18 -- # get_header_version minor 00:20:00.797 18:47:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:20:00.797 18:47:29 version -- app/version.sh@14 -- # tr -d '"' 00:20:00.797 18:47:29 version -- app/version.sh@14 -- # cut -f2 00:20:00.797 18:47:29 version -- app/version.sh@18 -- # minor=1 00:20:00.797 18:47:29 version -- app/version.sh@19 -- # get_header_version patch 00:20:00.797 18:47:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:20:00.797 18:47:29 version -- app/version.sh@14 -- # cut -f2 00:20:00.797 18:47:29 version -- app/version.sh@14 -- # tr -d '"' 00:20:00.797 18:47:29 version -- app/version.sh@19 -- # patch=0 00:20:00.797 18:47:29 version -- app/version.sh@20 -- # get_header_version suffix 00:20:00.797 18:47:29 version -- app/version.sh@14 -- # cut -f2 00:20:00.797 18:47:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:20:00.797 18:47:29 version -- app/version.sh@14 -- # tr -d '"' 00:20:00.797 18:47:29 version -- app/version.sh@20 -- # suffix=-pre 00:20:00.797 18:47:29 version -- app/version.sh@22 -- # version=25.1 00:20:00.797 18:47:29 version -- app/version.sh@25 -- # (( patch != 0 )) 00:20:00.797 18:47:29 version -- app/version.sh@28 -- # version=25.1rc0 00:20:00.797 18:47:29 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:20:00.797 18:47:29 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:20:00.797 18:47:29 version -- app/version.sh@30 -- # py_version=25.1rc0 00:20:00.797 18:47:29 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:20:00.797 00:20:00.797 real 0m0.289s 00:20:00.797 user 0m0.173s 00:20:00.797 sys 0m0.158s 00:20:00.797 ************************************ 00:20:00.797 END TEST version 00:20:00.797 ************************************ 00:20:00.797 18:47:29 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:00.797 18:47:29 version -- common/autotest_common.sh@10 -- # set +x 00:20:00.797 18:47:29 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:20:00.797 18:47:29 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:20:00.797 18:47:29 -- spdk/autotest.sh@194 -- # uname -s 00:20:00.797 18:47:29 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:20:00.797 18:47:29 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:20:00.797 18:47:29 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:20:00.797 18:47:29 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:20:00.797 18:47:29 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:20:00.797 18:47:29 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:00.797 18:47:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:00.797 18:47:29 -- common/autotest_common.sh@10 -- # set +x 00:20:00.797 ************************************ 00:20:00.797 START TEST blockdev_nvme 00:20:00.797 ************************************ 00:20:00.797 18:47:29 blockdev_nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:20:00.797 * Looking for test storage... 00:20:00.797 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:20:00.797 18:47:29 blockdev_nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:00.797 18:47:29 blockdev_nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:20:00.797 18:47:29 blockdev_nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:01.055 18:47:29 blockdev_nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:01.055 18:47:29 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:01.055 18:47:29 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:01.055 18:47:29 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:01.055 18:47:29 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:20:01.055 18:47:29 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:20:01.055 18:47:29 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:20:01.055 18:47:29 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:20:01.055 18:47:29 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:20:01.055 18:47:29 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:20:01.055 18:47:29 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:20:01.055 18:47:29 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:01.055 18:47:29 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:20:01.056 18:47:29 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:20:01.056 18:47:29 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:01.056 18:47:29 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:01.056 18:47:29 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:20:01.056 18:47:29 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:20:01.056 18:47:29 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:01.056 18:47:29 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:20:01.056 18:47:29 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:20:01.056 18:47:29 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:20:01.056 18:47:29 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:20:01.056 18:47:29 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:01.056 18:47:29 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:20:01.056 18:47:29 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:20:01.056 18:47:29 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:01.056 18:47:29 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:01.056 18:47:29 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:20:01.056 18:47:29 blockdev_nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:01.056 18:47:29 blockdev_nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:01.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.056 --rc genhtml_branch_coverage=1 00:20:01.056 --rc genhtml_function_coverage=1 00:20:01.056 --rc genhtml_legend=1 00:20:01.056 --rc geninfo_all_blocks=1 00:20:01.056 --rc geninfo_unexecuted_blocks=1 00:20:01.056 00:20:01.056 ' 00:20:01.056 18:47:29 blockdev_nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:01.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.056 --rc genhtml_branch_coverage=1 00:20:01.056 --rc genhtml_function_coverage=1 00:20:01.056 --rc genhtml_legend=1 00:20:01.056 --rc geninfo_all_blocks=1 00:20:01.056 --rc geninfo_unexecuted_blocks=1 00:20:01.056 00:20:01.056 ' 00:20:01.056 18:47:29 blockdev_nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:01.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.056 --rc genhtml_branch_coverage=1 00:20:01.056 --rc genhtml_function_coverage=1 00:20:01.056 --rc genhtml_legend=1 00:20:01.056 --rc geninfo_all_blocks=1 00:20:01.056 --rc geninfo_unexecuted_blocks=1 00:20:01.056 00:20:01.056 ' 00:20:01.056 18:47:29 blockdev_nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:01.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.056 --rc genhtml_branch_coverage=1 00:20:01.056 --rc genhtml_function_coverage=1 00:20:01.056 --rc genhtml_legend=1 00:20:01.056 --rc geninfo_all_blocks=1 00:20:01.056 --rc geninfo_unexecuted_blocks=1 00:20:01.056 00:20:01.056 ' 00:20:01.056 18:47:29 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:20:01.056 18:47:29 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:20:01.056 18:47:29 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:20:01.056 18:47:29 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:01.056 18:47:29 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:20:01.056 18:47:29 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:20:01.056 18:47:29 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:20:01.056 18:47:29 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:20:01.056 18:47:29 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:20:01.056 18:47:29 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:20:01.056 18:47:29 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:20:01.056 18:47:29 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:20:01.056 18:47:29 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:20:01.056 18:47:29 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:20:01.056 18:47:29 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:20:01.056 18:47:29 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:20:01.056 18:47:29 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:20:01.056 18:47:29 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:20:01.056 18:47:29 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:20:01.056 18:47:29 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:20:01.056 18:47:29 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:20:01.056 18:47:29 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:20:01.056 18:47:29 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:20:01.056 18:47:29 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:20:01.056 18:47:29 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61802 00:20:01.056 18:47:29 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:20:01.056 18:47:29 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 61802 00:20:01.056 18:47:29 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:20:01.056 18:47:29 blockdev_nvme -- common/autotest_common.sh@831 -- # '[' -z 61802 ']' 00:20:01.056 18:47:29 blockdev_nvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:01.056 18:47:29 blockdev_nvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:01.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:01.056 18:47:29 blockdev_nvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:01.056 18:47:29 blockdev_nvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:01.056 18:47:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:20:01.056 [2024-10-08 18:47:29.786854] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:20:01.056 [2024-10-08 18:47:29.787131] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61802 ] 00:20:01.314 [2024-10-08 18:47:29.992633] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.572 [2024-10-08 18:47:30.232411] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.506 18:47:31 blockdev_nvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:02.506 18:47:31 blockdev_nvme -- common/autotest_common.sh@864 -- # return 0 00:20:02.506 18:47:31 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:20:02.506 18:47:31 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:20:02.506 18:47:31 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:20:02.506 18:47:31 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:20:02.506 18:47:31 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:02.764 18:47:31 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:20:02.764 18:47:31 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.764 18:47:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:20:03.023 18:47:31 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.023 18:47:31 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:20:03.023 18:47:31 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.023 18:47:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:20:03.023 18:47:31 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.023 18:47:31 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:20:03.023 18:47:31 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:20:03.023 18:47:31 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.023 18:47:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:20:03.023 18:47:31 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.023 18:47:31 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:20:03.023 18:47:31 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.023 18:47:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:20:03.023 18:47:31 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.023 18:47:31 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:20:03.023 18:47:31 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.023 18:47:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:20:03.023 18:47:31 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.023 18:47:31 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:20:03.023 18:47:31 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:20:03.023 18:47:31 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.023 18:47:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:20:03.023 18:47:31 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:20:03.023 18:47:31 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.023 18:47:31 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:20:03.023 18:47:31 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:20:03.024 18:47:31 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "8b9e6f9e-cc5d-4f7a-8357-1605b750bd69"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "8b9e6f9e-cc5d-4f7a-8357-1605b750bd69",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "f04d7feb-5216-47c6-82d4-24f26884dd9b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "f04d7feb-5216-47c6-82d4-24f26884dd9b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "e3501fc4-a639-4946-9663-31c502724e78"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e3501fc4-a639-4946-9663-31c502724e78",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "148e01ed-4653-4037-817e-c51d57cf767b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "148e01ed-4653-4037-817e-c51d57cf767b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "dae82e25-6fa2-47f7-b9ca-1288a4e927d4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "dae82e25-6fa2-47f7-b9ca-1288a4e927d4",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "604f29a3-d0fc-4dcf-a44d-fcfc5203f3f9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "604f29a3-d0fc-4dcf-a44d-fcfc5203f3f9",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:20:03.282 18:47:31 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:20:03.282 18:47:31 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:20:03.282 18:47:31 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:20:03.282 18:47:31 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 61802 00:20:03.282 18:47:31 blockdev_nvme -- common/autotest_common.sh@950 -- # '[' -z 61802 ']' 00:20:03.282 18:47:31 blockdev_nvme -- common/autotest_common.sh@954 -- # kill -0 61802 00:20:03.282 18:47:31 blockdev_nvme -- common/autotest_common.sh@955 -- # uname 00:20:03.282 18:47:31 
blockdev_nvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:03.282 18:47:31 blockdev_nvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61802 00:20:03.282 killing process with pid 61802 00:20:03.282 18:47:31 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:03.282 18:47:31 blockdev_nvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:03.282 18:47:31 blockdev_nvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61802' 00:20:03.282 18:47:31 blockdev_nvme -- common/autotest_common.sh@969 -- # kill 61802 00:20:03.282 18:47:31 blockdev_nvme -- common/autotest_common.sh@974 -- # wait 61802 00:20:05.883 18:47:34 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:05.883 18:47:34 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:20:05.883 18:47:34 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:20:05.883 18:47:34 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:05.883 18:47:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:20:05.883 ************************************ 00:20:05.883 START TEST bdev_hello_world 00:20:05.883 ************************************ 00:20:05.883 18:47:34 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:20:06.145 [2024-10-08 18:47:34.729519] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:20:06.145 [2024-10-08 18:47:34.729705] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61903 ] 00:20:06.403 [2024-10-08 18:47:34.910038] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.403 [2024-10-08 18:47:35.138903] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:07.341 [2024-10-08 18:47:35.806282] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:20:07.341 [2024-10-08 18:47:35.806512] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:20:07.341 [2024-10-08 18:47:35.806552] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:20:07.341 [2024-10-08 18:47:35.809968] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:20:07.341 [2024-10-08 18:47:35.810481] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:20:07.341 [2024-10-08 18:47:35.810520] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:20:07.341 [2024-10-08 18:47:35.810701] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
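The hello-world pass above reduces to two commands; a minimal sketch, assuming the same /home/vagrant/spdk_repo layout and gen_nvme.sh's --json-with-subsystems flag (the /tmp path is illustrative only, not taken from the run):

SPDK=/home/vagrant/spdk_repo/spdk
# emit a bdev_nvme_attach_controller entry for each local PCIe NVMe device,
# wrapped as a full subsystem config that SPDK apps can consume via --json
$SPDK/scripts/gen_nvme.sh --json-with-subsystems > /tmp/nvme_bdev.json
# open Nvme0n1, write "Hello World!" to it, and read the string back
$SPDK/build/examples/hello_bdev --json /tmp/nvme_bdev.json -b Nvme0n1

Against a running target, the unclaimed bdev names dumped above come back with the same jq filter the trace uses: rpc.py bdev_get_bdevs | jq -r '.[] | select(.claimed == false) | .name'.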
00:20:07.341 00:20:07.341 [2024-10-08 18:47:35.810726] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:20:08.717 00:20:08.717 real 0m2.591s 00:20:08.717 user 0m2.177s 00:20:08.717 sys 0m0.304s 00:20:08.717 18:47:37 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:08.717 ************************************ 00:20:08.717 END TEST bdev_hello_world 00:20:08.717 ************************************ 00:20:08.717 18:47:37 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:20:08.717 18:47:37 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:20:08.717 18:47:37 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:08.717 18:47:37 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:08.717 18:47:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:20:08.717 ************************************ 00:20:08.717 START TEST bdev_bounds 00:20:08.717 ************************************ 00:20:08.717 18:47:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:20:08.717 Process bdevio pid: 61956 00:20:08.717 18:47:37 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61956 00:20:08.717 18:47:37 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:20:08.717 18:47:37 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61956' 00:20:08.717 18:47:37 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:20:08.717 18:47:37 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61956 00:20:08.717 18:47:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 61956 ']' 00:20:08.717 18:47:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.717 18:47:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:08.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:08.717 18:47:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.717 18:47:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:08.717 18:47:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:20:08.717 [2024-10-08 18:47:37.360291] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:20:08.717 [2024-10-08 18:47:37.360669] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61956 ] 00:20:08.980 [2024-10-08 18:47:37.527939] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:09.240 [2024-10-08 18:47:37.774655] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:20:09.240 [2024-10-08 18:47:37.774816] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.240 [2024-10-08 18:47:37.774836] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:20:09.805 18:47:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:09.805 18:47:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:20:09.805 18:47:38 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:20:10.062 I/O targets: 00:20:10.062 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:20:10.062 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:20:10.062 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:20:10.062 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:20:10.062 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:20:10.062 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:20:10.062 00:20:10.062 00:20:10.062 CUnit - A unit testing framework for C - Version 2.1-3 00:20:10.062 http://cunit.sourceforge.net/ 00:20:10.062 00:20:10.062 00:20:10.062 Suite: bdevio tests on: Nvme3n1 00:20:10.062 Test: blockdev write read block ...passed 00:20:10.062 Test: blockdev write zeroes read block ...passed 00:20:10.062 Test: blockdev write zeroes read no split ...passed 00:20:10.062 Test: blockdev write zeroes read split ...passed 00:20:10.062 Test: blockdev write zeroes read split partial ...passed 00:20:10.062 Test: blockdev reset ...[2024-10-08 18:47:38.724976] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:20:10.062 passed 00:20:10.062 Test: blockdev write read 8 blocks ...[2024-10-08 18:47:38.729606] bdev_nvme.c:2221:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
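The bdevio run unfolding here was started in wait mode, so the CUnit suites only begin once an RPC arrives; a condensed sketch of that pattern (not the verbatim test script):

SPDK=/home/vagrant/spdk_repo/spdk
# -w: wait for the perform_tests RPC; -s 0 matches PRE_RESERVED_MEM=0 above
$SPDK/test/bdev/bdevio/bdevio -w -s 0 --json $SPDK/test/bdev/bdev.json &
# once the app listens on /var/tmp/spdk.sock, kick off all suites over RPC
$SPDK/test/bdev/bdevio/tests.py perform_tests
wait $!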
00:20:10.062 passed 00:20:10.062 Test: blockdev write read size > 128k ...passed 00:20:10.062 Test: blockdev write read invalid size ...passed 00:20:10.062 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:10.062 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:10.062 Test: blockdev write read max offset ...passed 00:20:10.062 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:10.062 Test: blockdev writev readv 8 blocks ...passed 00:20:10.062 Test: blockdev writev readv 30 x 1block ...passed 00:20:10.062 Test: blockdev writev readv block ...passed 00:20:10.062 Test: blockdev writev readv size > 128k ...passed 00:20:10.062 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:10.062 Test: blockdev comparev and writev ...[2024-10-08 18:47:38.739264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2aca0a000 len:0x1000 00:20:10.062 [2024-10-08 18:47:38.739457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:20:10.062 passed 00:20:10.062 Test: blockdev nvme passthru rw ...passed 00:20:10.062 Test: blockdev nvme passthru vendor specific ...passed 00:20:10.062 Test: blockdev nvme admin passthru ...[2024-10-08 18:47:38.740213] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:20:10.062 [2024-10-08 18:47:38.740260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:20:10.062 passed 00:20:10.062 Test: blockdev copy ...passed 00:20:10.062 Suite: bdevio tests on: Nvme2n3 00:20:10.062 Test: blockdev write read block ...passed 00:20:10.062 Test: blockdev write zeroes read block ...passed 00:20:10.062 Test: blockdev write zeroes read no split ...passed 00:20:10.062 Test: blockdev write zeroes read split ...passed 00:20:10.322 Test: blockdev write zeroes read split partial ...passed 00:20:10.323 Test: blockdev reset ...[2024-10-08 18:47:38.820764] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:20:10.323 [2024-10-08 18:47:38.825530] bdev_nvme.c:2221:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:20:10.323 passed 00:20:10.323 Test: blockdev write read 8 blocks ...passed 00:20:10.323 Test: blockdev write read size > 128k ...passed 00:20:10.323 Test: blockdev write read invalid size ...passed 00:20:10.323 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:10.323 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:10.323 Test: blockdev write read max offset ...passed 00:20:10.323 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:10.323 Test: blockdev writev readv 8 blocks ...passed 00:20:10.323 Test: blockdev writev readv 30 x 1block ...passed 00:20:10.323 Test: blockdev writev readv block ...passed 00:20:10.323 Test: blockdev writev readv size > 128k ...passed 00:20:10.323 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:10.323 Test: blockdev comparev and writev ...[2024-10-08 18:47:38.836113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x290a04000 len:0x1000 00:20:10.323 [2024-10-08 18:47:38.836310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:20:10.323 passed 00:20:10.323 Test: blockdev nvme passthru rw ...passed 00:20:10.323 Test: blockdev nvme passthru vendor specific ...[2024-10-08 18:47:38.837261] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:20:10.323 [2024-10-08 18:47:38.837421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:20:10.323 passed 00:20:10.323 Test: blockdev nvme admin passthru ...passed 00:20:10.323 Test: blockdev copy ...passed 00:20:10.323 Suite: bdevio tests on: Nvme2n2 00:20:10.323 Test: blockdev write read block ...passed 00:20:10.323 Test: blockdev write zeroes read block ...passed 00:20:10.323 Test: blockdev write zeroes read no split ...passed 00:20:10.323 Test: blockdev write zeroes read split ...passed 00:20:10.323 Test: blockdev write zeroes read split partial ...passed 00:20:10.323 Test: blockdev reset ...[2024-10-08 18:47:38.914382] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:20:10.323 [2024-10-08 18:47:38.918896] bdev_nvme.c:2221:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:20:10.323 passed 00:20:10.323 Test: blockdev write read 8 blocks ...passed 00:20:10.323 Test: blockdev write read size > 128k ...passed 00:20:10.323 Test: blockdev write read invalid size ...passed 00:20:10.323 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:10.323 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:10.323 Test: blockdev write read max offset ...passed 00:20:10.323 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:10.323 Test: blockdev writev readv 8 blocks ...passed 00:20:10.323 Test: blockdev writev readv 30 x 1block ...passed 00:20:10.323 Test: blockdev writev readv block ...passed 00:20:10.323 Test: blockdev writev readv size > 128k ...passed 00:20:10.323 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:10.323 Test: blockdev comparev and writev ...[2024-10-08 18:47:38.927435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c163a000 len:0x1000 00:20:10.323 [2024-10-08 18:47:38.927490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:20:10.323 passed 00:20:10.323 Test: blockdev nvme passthru rw ...passed 00:20:10.323 Test: blockdev nvme passthru vendor specific ...passed 00:20:10.323 Test: blockdev nvme admin passthru ...[2024-10-08 18:47:38.928187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:20:10.323 [2024-10-08 18:47:38.928226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:20:10.323 passed 00:20:10.323 Test: blockdev copy ...passed 00:20:10.323 Suite: bdevio tests on: Nvme2n1 00:20:10.323 Test: blockdev write read block ...passed 00:20:10.323 Test: blockdev write zeroes read block ...passed 00:20:10.323 Test: blockdev write zeroes read no split ...passed 00:20:10.323 Test: blockdev write zeroes read split ...passed 00:20:10.323 Test: blockdev write zeroes read split partial ...passed 00:20:10.323 Test: blockdev reset ...[2024-10-08 18:47:39.014788] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:20:10.323 [2024-10-08 18:47:39.020120] bdev_nvme.c:2221:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:10.323 passed 00:20:10.323 Test: blockdev write read 8 blocks ...passed 00:20:10.323 Test: blockdev write read size > 128k ...passed 00:20:10.323 Test: blockdev write read invalid size ...passed 00:20:10.323 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:10.323 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:10.323 Test: blockdev write read max offset ...passed 00:20:10.323 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:10.323 Test: blockdev writev readv 8 blocks ...passed 00:20:10.323 Test: blockdev writev readv 30 x 1block ...passed 00:20:10.323 Test: blockdev writev readv block ...passed 00:20:10.323 Test: blockdev writev readv size > 128k ...passed 00:20:10.323 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:10.323 Test: blockdev comparev and writev ...[2024-10-08 18:47:39.033223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c1634000 len:0x1000 00:20:10.323 [2024-10-08 18:47:39.033297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:20:10.323 passed 00:20:10.323 Test: blockdev nvme passthru rw ...passed 00:20:10.323 Test: blockdev nvme passthru vendor specific ...[2024-10-08 18:47:39.033992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:20:10.323 [2024-10-08 18:47:39.034152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:20:10.323 passed 00:20:10.323 Test: blockdev nvme admin passthru ...passed 00:20:10.323 Test: blockdev copy ...passed 00:20:10.323 Suite: bdevio tests on: Nvme1n1 00:20:10.323 Test: blockdev write read block ...passed 00:20:10.323 Test: blockdev write zeroes read block ...passed 00:20:10.323 Test: blockdev write zeroes read no split ...passed 00:20:10.323 Test: blockdev write zeroes read split ...passed 00:20:10.581 Test: blockdev write zeroes read split partial ...passed 00:20:10.581 Test: blockdev reset ...[2024-10-08 18:47:39.108336] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:20:10.581 [2024-10-08 18:47:39.112627] bdev_nvme.c:2221:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:10.581 passed 00:20:10.581 Test: blockdev write read 8 blocks ...passed 00:20:10.581 Test: blockdev write read size > 128k ...passed 00:20:10.581 Test: blockdev write read invalid size ...passed 00:20:10.581 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:10.581 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:10.581 Test: blockdev write read max offset ...passed 00:20:10.581 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:10.581 Test: blockdev writev readv 8 blocks ...passed 00:20:10.581 Test: blockdev writev readv 30 x 1block ...passed 00:20:10.581 Test: blockdev writev readv block ...passed 00:20:10.581 Test: blockdev writev readv size > 128k ...passed 00:20:10.581 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:10.581 Test: blockdev comparev and writev ...[2024-10-08 18:47:39.121987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c1630000 len:0x1000 00:20:10.581 [2024-10-08 18:47:39.122050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:20:10.581 passed 00:20:10.581 Test: blockdev nvme passthru rw ...passed 00:20:10.581 Test: blockdev nvme passthru vendor specific ...[2024-10-08 18:47:39.122982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:20:10.581 [2024-10-08 18:47:39.123285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:20:10.581 passed 00:20:10.581 Test: blockdev nvme admin passthru ...passed 00:20:10.581 Test: blockdev copy ...passed 00:20:10.581 Suite: bdevio tests on: Nvme0n1 00:20:10.581 Test: blockdev write read block ...passed 00:20:10.581 Test: blockdev write zeroes read block ...passed 00:20:10.581 Test: blockdev write zeroes read no split ...passed 00:20:10.581 Test: blockdev write zeroes read split ...passed 00:20:10.581 Test: blockdev write zeroes read split partial ...passed 00:20:10.581 Test: blockdev reset ...[2024-10-08 18:47:39.204041] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:20:10.581 [2024-10-08 18:47:39.208808] bdev_nvme.c:2221:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:20:10.581 passed 00:20:10.581 Test: blockdev write read 8 blocks ...passed 00:20:10.581 Test: blockdev write read size > 128k ...passed 00:20:10.581 Test: blockdev write read invalid size ...passed 00:20:10.581 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:10.581 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:10.581 Test: blockdev write read max offset ...passed 00:20:10.581 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:10.581 Test: blockdev writev readv 8 blocks ...passed 00:20:10.581 Test: blockdev writev readv 30 x 1block ...passed 00:20:10.581 Test: blockdev writev readv block ...passed 00:20:10.581 Test: blockdev writev readv size > 128k ...passed 00:20:10.581 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:10.581 Test: blockdev comparev and writev ...passed 00:20:10.581 Test: blockdev nvme passthru rw ...[2024-10-08 18:47:39.224569] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:20:10.581 separate metadata which is not supported yet. 00:20:10.581 passed 00:20:10.581 Test: blockdev nvme passthru vendor specific ...passed 00:20:10.581 Test: blockdev nvme admin passthru ...[2024-10-08 18:47:39.225013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:20:10.581 [2024-10-08 18:47:39.225070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:20:10.581 passed 00:20:10.581 Test: blockdev copy ...passed 00:20:10.581 00:20:10.581 Run Summary: Type Total Ran Passed Failed Inactive 00:20:10.581 suites 6 6 n/a 0 0 00:20:10.581 tests 138 138 138 0 0 00:20:10.581 asserts 893 893 893 0 n/a 00:20:10.581 00:20:10.581 Elapsed time = 1.590 seconds 00:20:10.581 0 00:20:10.581 18:47:39 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61956 00:20:10.581 18:47:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 61956 ']' 00:20:10.581 18:47:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 61956 00:20:10.581 18:47:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:20:10.581 18:47:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:10.581 18:47:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61956 00:20:10.581 killing process with pid 61956 00:20:10.581 18:47:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:10.581 18:47:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:10.581 18:47:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61956' 00:20:10.581 18:47:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 61956 00:20:10.581 18:47:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 61956 00:20:11.958 18:47:40 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:20:11.958 00:20:11.958 real 0m3.236s 00:20:11.958 user 0m8.248s 00:20:11.958 sys 0m0.431s 00:20:11.958 18:47:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:11.958 18:47:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:20:11.958 ************************************ 00:20:11.958 END 
TEST bdev_bounds 00:20:11.958 ************************************ 00:20:11.958 18:47:40 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:20:11.958 18:47:40 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:20:11.958 18:47:40 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:11.958 18:47:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:20:11.958 ************************************ 00:20:11.958 START TEST bdev_nbd 00:20:11.958 ************************************ 00:20:11.958 18:47:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:20:11.958 18:47:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:20:11.958 18:47:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:20:11.958 18:47:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:11.958 18:47:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:11.958 18:47:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:20:11.958 18:47:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:20:11.958 18:47:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:20:11.958 18:47:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:20:11.958 18:47:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:20:11.958 18:47:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:20:11.958 18:47:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:20:11.958 18:47:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:11.958 18:47:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:20:11.958 18:47:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:20:11.958 18:47:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:20:11.958 18:47:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=62021 00:20:11.958 18:47:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:20:11.958 18:47:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:20:11.958 18:47:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 62021 /var/tmp/spdk-nbd.sock 00:20:11.958 18:47:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 62021 ']' 00:20:11.958 18:47:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:20:11.958 18:47:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:11.958 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:20:11.958 18:47:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:20:11.958 18:47:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:11.958 18:47:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:20:11.958 [2024-10-08 18:47:40.676795] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:20:11.958 [2024-10-08 18:47:40.677698] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:12.216 [2024-10-08 18:47:40.850697] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.474 [2024-10-08 18:47:41.078288] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.061 18:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:13.061 18:47:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:20:13.061 18:47:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:20:13.061 18:47:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:13.061 18:47:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:20:13.061 18:47:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:20:13.061 18:47:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:20:13.061 18:47:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:13.061 18:47:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:20:13.061 18:47:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:20:13.061 18:47:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:20:13.061 18:47:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:20:13.061 18:47:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:20:13.061 18:47:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:13.061 18:47:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:20:13.627 18:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:20:13.627 18:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:20:13.627 18:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:20:13.627 18:47:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:20:13.627 18:47:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:20:13.627 18:47:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:13.627 18:47:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:13.627 18:47:42 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:20:13.627 18:47:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:20:13.627 18:47:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:13.627 18:47:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:13.627 18:47:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:13.627 1+0 records in 00:20:13.627 1+0 records out 00:20:13.627 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000432097 s, 9.5 MB/s 00:20:13.627 18:47:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:13.627 18:47:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:20:13.627 18:47:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:13.627 18:47:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:13.627 18:47:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:20:13.627 18:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:13.627 18:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:13.627 18:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:20:13.886 18:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:20:13.886 18:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:20:13.886 18:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:20:13.886 18:47:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:20:13.886 18:47:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:20:13.886 18:47:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:13.886 18:47:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:13.886 18:47:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:20:13.886 18:47:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:20:13.886 18:47:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:13.886 18:47:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:13.886 18:47:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:13.886 1+0 records in 00:20:13.886 1+0 records out 00:20:13.886 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000577804 s, 7.1 MB/s 00:20:13.886 18:47:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:13.886 18:47:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:20:13.886 18:47:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:13.886 18:47:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:13.886 18:47:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:20:13.886 18:47:42 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:13.886 18:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:13.886 18:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:20:14.144 18:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:20:14.144 18:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:20:14.144 18:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:20:14.144 18:47:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:20:14.144 18:47:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:20:14.144 18:47:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:14.144 18:47:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:14.144 18:47:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:20:14.144 18:47:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:20:14.144 18:47:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:14.144 18:47:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:14.144 18:47:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:14.144 1+0 records in 00:20:14.144 1+0 records out 00:20:14.144 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000676736 s, 6.1 MB/s 00:20:14.144 18:47:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:14.144 18:47:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:20:14.144 18:47:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:14.144 18:47:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:14.144 18:47:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:20:14.144 18:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:14.144 18:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:14.144 18:47:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:20:14.403 18:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:20:14.403 18:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:20:14.403 18:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:20:14.403 18:47:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:20:14.403 18:47:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:20:14.403 18:47:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:14.403 18:47:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:14.403 18:47:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:20:14.403 18:47:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:20:14.403 18:47:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( 
i = 1 )) 00:20:14.403 18:47:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:14.403 18:47:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:14.403 1+0 records in 00:20:14.403 1+0 records out 00:20:14.403 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000666457 s, 6.1 MB/s 00:20:14.403 18:47:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:14.403 18:47:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:20:14.403 18:47:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:14.403 18:47:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:14.403 18:47:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:20:14.403 18:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:14.403 18:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:14.403 18:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:20:14.968 18:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:20:14.968 18:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:20:14.968 18:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:20:14.968 18:47:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:20:14.968 18:47:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:20:14.968 18:47:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:14.968 18:47:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:14.968 18:47:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:20:14.968 18:47:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:20:14.968 18:47:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:14.968 18:47:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:14.968 18:47:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:14.968 1+0 records in 00:20:14.968 1+0 records out 00:20:14.968 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000741381 s, 5.5 MB/s 00:20:14.968 18:47:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:14.968 18:47:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:20:14.968 18:47:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:14.968 18:47:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:14.969 18:47:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:20:14.969 18:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:14.969 18:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:14.969 18:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:20:15.226 18:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:20:15.226 18:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:20:15.226 18:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:20:15.226 18:47:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:20:15.226 18:47:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:20:15.226 18:47:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:15.226 18:47:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:15.226 18:47:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:20:15.226 18:47:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:20:15.226 18:47:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:15.226 18:47:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:15.226 18:47:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:15.226 1+0 records in 00:20:15.227 1+0 records out 00:20:15.227 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000787673 s, 5.2 MB/s 00:20:15.227 18:47:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:15.227 18:47:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:20:15.227 18:47:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:15.227 18:47:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:15.227 18:47:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:20:15.227 18:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:15.227 18:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:15.227 18:47:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:15.793 18:47:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:20:15.793 { 00:20:15.793 "nbd_device": "/dev/nbd0", 00:20:15.793 "bdev_name": "Nvme0n1" 00:20:15.793 }, 00:20:15.793 { 00:20:15.793 "nbd_device": "/dev/nbd1", 00:20:15.793 "bdev_name": "Nvme1n1" 00:20:15.793 }, 00:20:15.793 { 00:20:15.793 "nbd_device": "/dev/nbd2", 00:20:15.793 "bdev_name": "Nvme2n1" 00:20:15.793 }, 00:20:15.793 { 00:20:15.793 "nbd_device": "/dev/nbd3", 00:20:15.793 "bdev_name": "Nvme2n2" 00:20:15.793 }, 00:20:15.793 { 00:20:15.793 "nbd_device": "/dev/nbd4", 00:20:15.793 "bdev_name": "Nvme2n3" 00:20:15.793 }, 00:20:15.793 { 00:20:15.793 "nbd_device": "/dev/nbd5", 00:20:15.793 "bdev_name": "Nvme3n1" 00:20:15.793 } 00:20:15.793 ]' 00:20:15.793 18:47:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:20:15.793 18:47:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:20:15.793 { 00:20:15.793 "nbd_device": "/dev/nbd0", 00:20:15.793 "bdev_name": "Nvme0n1" 00:20:15.793 }, 00:20:15.793 { 00:20:15.793 "nbd_device": "/dev/nbd1", 00:20:15.793 "bdev_name": "Nvme1n1" 00:20:15.793 }, 00:20:15.793 { 00:20:15.793 
"nbd_device": "/dev/nbd2", 00:20:15.793 "bdev_name": "Nvme2n1" 00:20:15.793 }, 00:20:15.793 { 00:20:15.793 "nbd_device": "/dev/nbd3", 00:20:15.793 "bdev_name": "Nvme2n2" 00:20:15.793 }, 00:20:15.793 { 00:20:15.793 "nbd_device": "/dev/nbd4", 00:20:15.793 "bdev_name": "Nvme2n3" 00:20:15.793 }, 00:20:15.793 { 00:20:15.793 "nbd_device": "/dev/nbd5", 00:20:15.793 "bdev_name": "Nvme3n1" 00:20:15.793 } 00:20:15.793 ]' 00:20:15.793 18:47:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:20:15.793 18:47:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:20:15.793 18:47:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:15.793 18:47:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:20:15.793 18:47:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:15.793 18:47:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:15.793 18:47:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:15.793 18:47:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:16.051 18:47:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:16.051 18:47:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:16.051 18:47:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:16.051 18:47:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:16.051 18:47:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:16.051 18:47:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:16.051 18:47:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:16.051 18:47:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:16.051 18:47:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:16.051 18:47:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:20:16.309 18:47:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:16.309 18:47:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:16.309 18:47:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:16.309 18:47:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:16.309 18:47:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:16.309 18:47:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:16.309 18:47:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:16.309 18:47:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:16.309 18:47:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:16.309 18:47:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:20:16.568 18:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:20:16.568 18:47:45 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:20:16.568 18:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:20:16.568 18:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:16.568 18:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:16.568 18:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:20:16.568 18:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:16.568 18:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:16.568 18:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:16.568 18:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:20:17.136 18:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:20:17.136 18:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:20:17.136 18:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:20:17.136 18:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:17.136 18:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:17.136 18:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:20:17.136 18:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:17.136 18:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:17.136 18:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:17.136 18:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:20:17.395 18:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:20:17.395 18:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:20:17.395 18:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:20:17.395 18:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:17.395 18:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:17.395 18:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:20:17.395 18:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:17.395 18:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:17.395 18:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:17.395 18:47:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:20:17.654 18:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:20:17.654 18:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:20:17.654 18:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:20:17.654 18:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:17.654 18:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:17.654 18:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:20:17.654 18:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 
00:20:17.654 18:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:17.654 18:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:17.654 18:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:17.654 18:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:17.912 18:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:17.912 18:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:17.912 18:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:17.912 18:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:17.912 18:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:20:17.912 18:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:17.912 18:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:20:17.912 18:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:20:17.912 18:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:20:17.912 18:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:20:17.912 18:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:20:17.912 18:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:20:17.912 18:47:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:20:17.912 18:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:17.912 18:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:20:17.912 18:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:20:17.912 18:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:17.912 18:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:20:17.912 18:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:20:17.912 18:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:17.913 18:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:20:17.913 18:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:17.913 18:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:17.913 18:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:17.913 18:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:20:17.913 18:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:17.913 18:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:17.913 18:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:20:18.171 /dev/nbd0 00:20:18.171 18:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:18.171 18:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:18.171 18:47:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:20:18.171 18:47:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:20:18.171 18:47:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:18.171 18:47:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:18.171 18:47:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:20:18.171 18:47:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:20:18.171 18:47:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:18.171 18:47:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:18.171 18:47:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:18.171 1+0 records in 00:20:18.171 1+0 records out 00:20:18.171 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000612749 s, 6.7 MB/s 00:20:18.171 18:47:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:18.171 18:47:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:20:18.171 18:47:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:18.171 18:47:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:18.171 18:47:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:20:18.171 18:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:18.171 18:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:18.171 18:47:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:20:18.428 /dev/nbd1 00:20:18.428 18:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:18.428 18:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:18.428 18:47:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:20:18.428 18:47:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:20:18.428 18:47:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:18.428 18:47:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:18.428 18:47:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:20:18.428 18:47:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:20:18.428 18:47:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:18.428 18:47:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:18.428 18:47:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:18.428 1+0 records in 00:20:18.428 1+0 records out 
00:20:18.428 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000571036 s, 7.2 MB/s 00:20:18.428 18:47:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:18.687 18:47:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:20:18.687 18:47:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:18.687 18:47:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:18.687 18:47:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:20:18.687 18:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:18.687 18:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:18.687 18:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:20:18.687 /dev/nbd10 00:20:18.946 18:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:20:18.946 18:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:20:18.946 18:47:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:20:18.946 18:47:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:20:18.946 18:47:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:18.946 18:47:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:18.946 18:47:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:20:18.946 18:47:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:20:18.946 18:47:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:18.946 18:47:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:18.946 18:47:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:18.946 1+0 records in 00:20:18.946 1+0 records out 00:20:18.946 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000606815 s, 6.7 MB/s 00:20:18.946 18:47:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:18.946 18:47:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:20:18.946 18:47:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:18.946 18:47:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:18.946 18:47:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:20:18.946 18:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:18.946 18:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:18.946 18:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:20:19.204 /dev/nbd11 00:20:19.204 18:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:20:19.204 18:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:20:19.204 18:47:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:20:19.204 18:47:47 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:20:19.204 18:47:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:19.204 18:47:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:19.204 18:47:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:20:19.204 18:47:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:20:19.204 18:47:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:19.204 18:47:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:19.204 18:47:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:19.204 1+0 records in 00:20:19.204 1+0 records out 00:20:19.204 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000676582 s, 6.1 MB/s 00:20:19.204 18:47:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:19.204 18:47:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:20:19.204 18:47:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:19.204 18:47:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:19.204 18:47:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:20:19.204 18:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:19.204 18:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:19.204 18:47:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:20:19.462 /dev/nbd12 00:20:19.462 18:47:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:20:19.462 18:47:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:20:19.462 18:47:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:20:19.462 18:47:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:20:19.462 18:47:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:19.462 18:47:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:19.462 18:47:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:20:19.462 18:47:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:20:19.462 18:47:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:19.462 18:47:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:19.462 18:47:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:19.462 1+0 records in 00:20:19.462 1+0 records out 00:20:19.462 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000566902 s, 7.2 MB/s 00:20:19.462 18:47:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:19.463 18:47:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:20:19.463 18:47:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:19.463 18:47:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:19.463 18:47:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:20:19.463 18:47:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:19.463 18:47:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:19.463 18:47:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:20:20.031 /dev/nbd13 00:20:20.031 18:47:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:20:20.031 18:47:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:20:20.031 18:47:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:20:20.031 18:47:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:20:20.031 18:47:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:20.031 18:47:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:20.031 18:47:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:20:20.031 18:47:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:20:20.032 18:47:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:20.032 18:47:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:20.032 18:47:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:20.032 1+0 records in 00:20:20.032 1+0 records out 00:20:20.032 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00113996 s, 3.6 MB/s 00:20:20.032 18:47:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:20.032 18:47:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:20:20.032 18:47:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:20.032 18:47:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:20.032 18:47:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:20:20.032 18:47:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:20.032 18:47:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:20.032 18:47:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:20.032 18:47:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:20.032 18:47:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:20.289 18:47:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:20:20.289 { 00:20:20.289 "nbd_device": "/dev/nbd0", 00:20:20.289 "bdev_name": "Nvme0n1" 00:20:20.289 }, 00:20:20.289 { 00:20:20.289 "nbd_device": "/dev/nbd1", 00:20:20.289 "bdev_name": "Nvme1n1" 00:20:20.289 }, 00:20:20.289 { 00:20:20.289 "nbd_device": "/dev/nbd10", 00:20:20.289 "bdev_name": "Nvme2n1" 00:20:20.289 }, 00:20:20.289 { 00:20:20.289 "nbd_device": "/dev/nbd11", 00:20:20.289 "bdev_name": "Nvme2n2" 00:20:20.289 }, 
00:20:20.289 { 00:20:20.289 "nbd_device": "/dev/nbd12", 00:20:20.289 "bdev_name": "Nvme2n3" 00:20:20.289 }, 00:20:20.289 { 00:20:20.289 "nbd_device": "/dev/nbd13", 00:20:20.289 "bdev_name": "Nvme3n1" 00:20:20.289 } 00:20:20.289 ]' 00:20:20.289 18:47:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:20:20.289 { 00:20:20.289 "nbd_device": "/dev/nbd0", 00:20:20.289 "bdev_name": "Nvme0n1" 00:20:20.289 }, 00:20:20.289 { 00:20:20.289 "nbd_device": "/dev/nbd1", 00:20:20.289 "bdev_name": "Nvme1n1" 00:20:20.289 }, 00:20:20.289 { 00:20:20.289 "nbd_device": "/dev/nbd10", 00:20:20.289 "bdev_name": "Nvme2n1" 00:20:20.289 }, 00:20:20.289 { 00:20:20.289 "nbd_device": "/dev/nbd11", 00:20:20.289 "bdev_name": "Nvme2n2" 00:20:20.289 }, 00:20:20.289 { 00:20:20.289 "nbd_device": "/dev/nbd12", 00:20:20.289 "bdev_name": "Nvme2n3" 00:20:20.289 }, 00:20:20.289 { 00:20:20.289 "nbd_device": "/dev/nbd13", 00:20:20.289 "bdev_name": "Nvme3n1" 00:20:20.289 } 00:20:20.289 ]' 00:20:20.289 18:47:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:20.289 18:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:20:20.289 /dev/nbd1 00:20:20.289 /dev/nbd10 00:20:20.289 /dev/nbd11 00:20:20.289 /dev/nbd12 00:20:20.289 /dev/nbd13' 00:20:20.289 18:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:20:20.289 /dev/nbd1 00:20:20.289 /dev/nbd10 00:20:20.289 /dev/nbd11 00:20:20.289 /dev/nbd12 00:20:20.289 /dev/nbd13' 00:20:20.289 18:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:20.289 18:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:20:20.289 18:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:20:20.289 18:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:20:20.289 18:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:20:20.289 18:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:20:20.289 18:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:20.289 18:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:20.289 18:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:20:20.289 18:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:20.289 18:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:20:20.289 18:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:20:20.548 256+0 records in 00:20:20.548 256+0 records out 00:20:20.548 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00867931 s, 121 MB/s 00:20:20.548 18:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:20.548 18:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:20:20.548 256+0 records in 00:20:20.548 256+0 records out 00:20:20.548 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.126437 s, 8.3 MB/s 00:20:20.548 18:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:20.548 18:47:49 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:20:20.805 256+0 records in 00:20:20.805 256+0 records out 00:20:20.805 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.13345 s, 7.9 MB/s 00:20:20.805 18:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:20.805 18:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:20:20.805 256+0 records in 00:20:20.805 256+0 records out 00:20:20.805 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.133686 s, 7.8 MB/s 00:20:20.805 18:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:20.805 18:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:20:21.062 256+0 records in 00:20:21.062 256+0 records out 00:20:21.062 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.143796 s, 7.3 MB/s 00:20:21.062 18:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:21.062 18:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:20:21.062 256+0 records in 00:20:21.062 256+0 records out 00:20:21.062 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.134167 s, 7.8 MB/s 00:20:21.062 18:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:21.062 18:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:20:21.321 256+0 records in 00:20:21.321 256+0 records out 00:20:21.321 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.134523 s, 7.8 MB/s 00:20:21.321 18:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:20:21.321 18:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:21.321 18:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:21.321 18:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:20:21.321 18:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:21.321 18:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:20:21.321 18:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:20:21.321 18:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:21.321 18:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:20:21.321 18:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:21.321 18:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:20:21.321 18:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:21.321 18:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:20:21.321 18:47:49 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:21.321 18:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:20:21.321 18:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:21.321 18:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:20:21.321 18:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:21.321 18:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:20:21.321 18:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:21.321 18:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:20:21.321 18:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:21.321 18:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:21.321 18:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:21.321 18:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:21.321 18:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:21.321 18:47:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:21.580 18:47:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:21.580 18:47:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:21.580 18:47:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:21.580 18:47:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:21.580 18:47:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:21.580 18:47:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:21.580 18:47:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:21.580 18:47:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:21.580 18:47:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:21.581 18:47:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:20:22.147 18:47:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:22.147 18:47:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:22.147 18:47:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:22.147 18:47:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:22.147 18:47:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:22.147 18:47:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:22.147 18:47:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:22.147 18:47:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:22.147 18:47:50 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:22.147 18:47:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:20:22.406 18:47:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:20:22.406 18:47:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:20:22.406 18:47:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:20:22.406 18:47:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:22.406 18:47:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:22.406 18:47:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:20:22.406 18:47:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:22.406 18:47:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:22.406 18:47:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:22.406 18:47:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:20:22.664 18:47:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:20:22.664 18:47:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:20:22.664 18:47:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:20:22.664 18:47:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:22.664 18:47:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:22.664 18:47:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:20:22.664 18:47:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:22.664 18:47:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:22.664 18:47:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:22.664 18:47:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:20:22.922 18:47:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:20:22.922 18:47:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:20:22.922 18:47:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:20:22.922 18:47:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:22.922 18:47:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:22.922 18:47:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:20:22.922 18:47:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:22.922 18:47:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:22.922 18:47:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:22.922 18:47:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:20:23.180 18:47:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:20:23.180 18:47:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:20:23.180 18:47:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:20:23.180 
18:47:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:23.180 18:47:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:23.180 18:47:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:20:23.180 18:47:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:23.180 18:47:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:23.180 18:47:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:23.180 18:47:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:23.180 18:47:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:23.438 18:47:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:23.438 18:47:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:23.438 18:47:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:23.438 18:47:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:23.438 18:47:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:20:23.438 18:47:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:23.438 18:47:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:20:23.438 18:47:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:20:23.438 18:47:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:20:23.438 18:47:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:20:23.438 18:47:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:20:23.438 18:47:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:20:23.438 18:47:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:23.438 18:47:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:23.438 18:47:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:20:23.438 18:47:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:20:24.005 malloc_lvol_verify 00:20:24.005 18:47:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:20:24.263 3d735fe0-d4e7-4393-b0e7-2df7155f0358 00:20:24.263 18:47:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:20:24.522 2efa1a58-4702-4638-89da-2b3bc335ee45 00:20:24.522 18:47:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:20:24.781 /dev/nbd0 00:20:24.781 18:47:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:20:24.781 18:47:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:20:24.781 18:47:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:20:24.781 18:47:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:20:24.781 18:47:53 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:20:24.781 mke2fs 1.47.0 (5-Feb-2023) 00:20:24.781 Discarding device blocks: 0/4096 done 00:20:24.781 Creating filesystem with 4096 1k blocks and 1024 inodes 00:20:24.781 00:20:24.781 Allocating group tables: 0/1 done 00:20:24.781 Writing inode tables: 0/1 done 00:20:24.781 Creating journal (1024 blocks): done 00:20:24.781 Writing superblocks and filesystem accounting information: 0/1 done 00:20:24.781 00:20:24.781 18:47:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:24.781 18:47:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:24.781 18:47:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:24.781 18:47:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:24.781 18:47:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:24.781 18:47:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:24.781 18:47:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:25.349 18:47:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:25.349 18:47:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:25.349 18:47:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:25.349 18:47:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:25.349 18:47:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:25.349 18:47:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:25.349 18:47:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:25.349 18:47:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:25.349 18:47:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 62021 00:20:25.349 18:47:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 62021 ']' 00:20:25.349 18:47:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 62021 00:20:25.349 18:47:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:20:25.349 18:47:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:25.349 18:47:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62021 00:20:25.349 18:47:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:25.349 killing process with pid 62021 00:20:25.349 18:47:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:25.349 18:47:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62021' 00:20:25.349 18:47:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 62021 00:20:25.349 18:47:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 62021 00:20:26.724 ************************************ 00:20:26.724 END TEST bdev_nbd 00:20:26.724 ************************************ 00:20:26.724 18:47:55 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:20:26.724 00:20:26.724 real 0m14.814s 00:20:26.724 user 0m20.120s 00:20:26.724 sys 0m5.719s 00:20:26.724 18:47:55 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:20:26.724 18:47:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:20:26.724 18:47:55 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:20:26.724 18:47:55 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:20:26.724 skipping fio tests on NVMe due to multi-ns failures. 00:20:26.724 18:47:55 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:20:26.724 18:47:55 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:26.724 18:47:55 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:26.724 18:47:55 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:20:26.724 18:47:55 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:26.724 18:47:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:20:26.724 ************************************ 00:20:26.724 START TEST bdev_verify 00:20:26.724 ************************************ 00:20:26.724 18:47:55 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:26.982 [2024-10-08 18:47:55.529371] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:20:26.982 [2024-10-08 18:47:55.529507] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62451 ] 00:20:26.982 [2024-10-08 18:47:55.695786] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:27.240 [2024-10-08 18:47:55.934655] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:27.240 [2024-10-08 18:47:55.934661] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:20:28.177 Running I/O for 5 seconds... 
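The bdev_verify stage launched just above drives all six NVMe bdevs through SPDK's bdevperf example with a self-checking workload: -w verify writes a pattern and reads it back for comparison, -q 128 keeps 128 IOs in flight, -o 4096 uses 4 KiB IOs, -t 5 runs for five seconds, and -m 0x3 gives the reactors two cores (hence the two "Reactor started" lines). A minimal standalone sketch of the same invocation, with every flag copied verbatim from the trace above:

  BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
  # -q queue depth, -o IO size in bytes, -w workload type, -t run time in
  # seconds; -C, -m 0x3 and the trailing '' are passed exactly as in the
  # run above
  "$BDEVPERF" --json "$CONF" -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''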
00:20:30.544 17024.00 IOPS, 66.50 MiB/s [2024-10-08T18:48:00.238Z] 16864.00 IOPS, 65.88 MiB/s [2024-10-08T18:48:01.173Z] 16917.33 IOPS, 66.08 MiB/s [2024-10-08T18:48:02.110Z] 17168.00 IOPS, 67.06 MiB/s [2024-10-08T18:48:02.110Z] 17100.80 IOPS, 66.80 MiB/s 00:20:33.353 Latency(us) 00:20:33.353 [2024-10-08T18:48:02.110Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:33.353 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:33.353 Verification LBA range: start 0x0 length 0xbd0bd 00:20:33.353 Nvme0n1 : 5.09 1359.25 5.31 0.00 0.00 93944.94 16103.13 127826.41 00:20:33.353 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:33.353 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:20:33.353 Nvme0n1 : 5.08 1460.49 5.71 0.00 0.00 87374.38 15603.81 80890.15 00:20:33.353 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:33.353 Verification LBA range: start 0x0 length 0xa0000 00:20:33.353 Nvme1n1 : 5.09 1358.90 5.31 0.00 0.00 93795.00 14043.43 124331.15 00:20:33.353 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:33.353 Verification LBA range: start 0xa0000 length 0xa0000 00:20:33.353 Nvme1n1 : 5.09 1459.67 5.70 0.00 0.00 87240.55 18100.42 77394.90 00:20:33.353 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:33.353 Verification LBA range: start 0x0 length 0x80000 00:20:33.353 Nvme2n1 : 5.09 1357.94 5.30 0.00 0.00 93601.70 15478.98 115343.36 00:20:33.353 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:33.353 Verification LBA range: start 0x80000 length 0x80000 00:20:33.353 Nvme2n1 : 5.09 1459.03 5.70 0.00 0.00 87131.46 20222.54 74398.96 00:20:33.353 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:33.353 Verification LBA range: start 0x0 length 0x80000 00:20:33.353 Nvme2n2 : 5.09 1357.48 5.30 0.00 0.00 93433.14 15728.64 110350.14 00:20:33.353 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:33.353 Verification LBA range: start 0x80000 length 0x80000 00:20:33.353 Nvme2n2 : 5.09 1458.67 5.70 0.00 0.00 86995.02 21346.01 73400.32 00:20:33.353 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:33.353 Verification LBA range: start 0x0 length 0x80000 00:20:33.353 Nvme2n3 : 5.09 1357.01 5.30 0.00 0.00 93265.58 15853.47 115842.68 00:20:33.353 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:33.353 Verification LBA range: start 0x80000 length 0x80000 00:20:33.353 Nvme2n3 : 5.09 1458.18 5.70 0.00 0.00 86848.80 18599.74 76895.57 00:20:33.353 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:33.353 Verification LBA range: start 0x0 length 0x20000 00:20:33.353 Nvme3n1 : 5.10 1356.54 5.30 0.00 0.00 93107.43 16352.79 121335.22 00:20:33.353 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:33.353 Verification LBA range: start 0x20000 length 0x20000 00:20:33.353 Nvme3n1 : 5.09 1457.68 5.69 0.00 0.00 86699.70 14979.66 79891.50 00:20:33.353 [2024-10-08T18:48:02.110Z] =================================================================================================================== 00:20:33.353 [2024-10-08T18:48:02.110Z] Total : 16900.85 66.02 0.00 0.00 90170.83 14043.43 127826.41 00:20:35.262 ************************************ 00:20:35.262 END TEST bdev_verify 00:20:35.262 ************************************ 00:20:35.262 
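A quick consistency check on the results table above: the MiB/s column is IOPS scaled by the 4 KiB IO size. For the Total row, 16900.85 IOPS × 4096 B = 69,225,881.6 B/s, and dividing by 1,048,576 gives ≈ 66.02 MiB/s, matching the reported value.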
00:20:35.262 real 0m8.112s 00:20:35.262 user 0m14.710s 00:20:35.262 sys 0m0.322s 00:20:35.262 18:48:03 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:35.262 18:48:03 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:20:35.262 18:48:03 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:35.262 18:48:03 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:20:35.262 18:48:03 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:35.262 18:48:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:20:35.262 ************************************ 00:20:35.262 START TEST bdev_verify_big_io 00:20:35.262 ************************************ 00:20:35.262 18:48:03 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:35.262 [2024-10-08 18:48:03.714885] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:20:35.262 [2024-10-08 18:48:03.715261] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62555 ] 00:20:35.262 [2024-10-08 18:48:03.885897] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:35.520 [2024-10-08 18:48:04.205511] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:35.520 [2024-10-08 18:48:04.205543] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:20:36.455 Running I/O for 5 seconds... 
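The bdev_verify_big_io run starting above is the same verify workload with one change, -o 65536 instead of -o 4096, so each IO moves 64 KiB; that is why the IOPS figures below are roughly an order of magnitude lower than in the previous test while per-IO bandwidth is higher. Reusing the variables from the earlier sketch:

  # identical to the bdev_verify invocation except for the IO size
  "$BDEVPERF" --json "$CONF" -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''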
00:20:39.553 1373.00 IOPS, 85.81 MiB/s [2024-10-08T18:48:09.691Z] 1568.00 IOPS, 98.00 MiB/s [2024-10-08T18:48:11.067Z] 1561.67 IOPS, 97.60 MiB/s [2024-10-08T18:48:11.067Z] 1736.50 IOPS, 108.53 MiB/s [2024-10-08T18:48:11.067Z] 1972.20 IOPS, 123.26 MiB/s 00:20:42.310 Latency(us) 00:20:42.310 [2024-10-08T18:48:11.067Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:42.310 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:42.310 Verification LBA range: start 0x0 length 0xbd0b 00:20:42.310 Nvme0n1 : 5.60 137.11 8.57 0.00 0.00 901341.46 20597.03 982665.51 00:20:42.310 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:42.310 Verification LBA range: start 0xbd0b length 0xbd0b 00:20:42.310 Nvme0n1 : 5.56 126.67 7.92 0.00 0.00 966662.21 25715.08 1374133.88 00:20:42.310 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:42.310 Verification LBA range: start 0x0 length 0xa000 00:20:42.310 Nvme1n1 : 5.70 135.53 8.47 0.00 0.00 878073.56 56922.70 978670.93 00:20:42.310 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:42.310 Verification LBA range: start 0xa000 length 0xa000 00:20:42.310 Nvme1n1 : 5.69 139.42 8.71 0.00 0.00 858854.77 40694.74 786931.32 00:20:42.310 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:42.310 Verification LBA range: start 0x0 length 0x8000 00:20:42.310 Nvme2n1 : 5.71 139.85 8.74 0.00 0.00 841666.83 98366.42 982665.51 00:20:42.310 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:42.310 Verification LBA range: start 0x8000 length 0x8000 00:20:42.310 Nvme2n1 : 5.73 137.57 8.60 0.00 0.00 853749.53 86382.69 1470003.69 00:20:42.310 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:42.310 Verification LBA range: start 0x0 length 0x8000 00:20:42.310 Nvme2n2 : 5.73 145.17 9.07 0.00 0.00 795851.97 22219.82 834866.22 00:20:42.310 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:42.310 Verification LBA range: start 0x8000 length 0x8000 00:20:42.310 Nvme2n2 : 5.73 138.89 8.68 0.00 0.00 824449.47 85883.37 1485981.99 00:20:42.310 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:42.310 Verification LBA range: start 0x0 length 0x8000 00:20:42.310 Nvme2n3 : 5.78 150.66 9.42 0.00 0.00 748193.03 37698.80 874811.98 00:20:42.310 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:42.310 Verification LBA range: start 0x8000 length 0x8000 00:20:42.310 Nvme2n3 : 5.79 152.12 9.51 0.00 0.00 733160.77 31207.62 1102502.77 00:20:42.310 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:42.310 Verification LBA range: start 0x0 length 0x2000 00:20:42.310 Nvme3n1 : 5.79 158.79 9.92 0.00 0.00 693895.24 2933.52 902774.00 00:20:42.310 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:42.310 Verification LBA range: start 0x2000 length 0x2000 00:20:42.310 Nvme3n1 : 5.83 171.88 10.74 0.00 0.00 633573.94 4431.48 1118481.07 00:20:42.310 [2024-10-08T18:48:11.067Z] =================================================================================================================== 00:20:42.310 [2024-10-08T18:48:11.067Z] Total : 1733.65 108.35 0.00 0.00 802571.29 2933.52 1485981.99 00:20:44.919 ************************************ 00:20:44.919 END TEST bdev_verify_big_io 00:20:44.919 ************************************ 
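The same sanity check holds at the larger IO size: 1733.65 IOPS × 65,536 B = 113,616,486.4 B/s ≈ 108.35 MiB/s, matching the Total row above.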
00:20:44.919 00:20:44.919 real 0m9.465s 00:20:44.919 user 0m17.055s 00:20:44.919 sys 0m0.365s 00:20:44.919 18:48:13 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:44.919 18:48:13 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:20:44.919 18:48:13 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:44.919 18:48:13 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:20:44.919 18:48:13 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:44.919 18:48:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:20:44.919 ************************************ 00:20:44.919 START TEST bdev_write_zeroes 00:20:44.919 ************************************ 00:20:44.919 18:48:13 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:44.919 [2024-10-08 18:48:13.270272] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:20:44.919 [2024-10-08 18:48:13.270440] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62676 ] 00:20:44.919 [2024-10-08 18:48:13.451016] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.178 [2024-10-08 18:48:13.761111] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:45.747 Running I/O for 1 seconds... 
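bdev_write_zeroes, started just above, swaps the workload for -w write_zeroes: bdevperf issues write-zeroes requests for one second on a single core (note the single "Reactor started" line and "Total cores available: 1"), so no data pattern is written or read back. The 47232.00 IOPS figure that follows works out to 47232 × 4096 B ≈ 184.50 MiB/s. Reusing the earlier variables, the equivalent invocation is:

  # single-core, one-second write-zeroes run, flags as in the trace above
  "$BDEVPERF" --json "$CONF" -q 128 -o 4096 -w write_zeroes -t 1 ''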
00:20:47.121 47232.00 IOPS, 184.50 MiB/s 00:20:47.121 Latency(us) 00:20:47.121 [2024-10-08T18:48:15.878Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:47.121 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:47.121 Nvme0n1 : 1.03 7822.15 30.56 0.00 0.00 16320.97 11734.06 29085.50 00:20:47.121 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:47.121 Nvme1n1 : 1.03 7810.21 30.51 0.00 0.00 16322.75 12046.14 28461.35 00:20:47.121 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:47.121 Nvme2n1 : 1.03 7798.48 30.46 0.00 0.00 16290.08 11921.31 27337.87 00:20:47.121 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:47.121 Nvme2n2 : 1.04 7786.51 30.42 0.00 0.00 16208.67 11359.57 26588.89 00:20:47.121 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:47.121 Nvme2n3 : 1.04 7774.83 30.37 0.00 0.00 16186.09 9924.02 27337.87 00:20:47.121 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:47.121 Nvme3n1 : 1.04 7762.95 30.32 0.00 0.00 16159.60 7989.15 29584.82 00:20:47.121 [2024-10-08T18:48:15.878Z] =================================================================================================================== 00:20:47.121 [2024-10-08T18:48:15.878Z] Total : 46755.13 182.64 0.00 0.00 16248.03 7989.15 29584.82 00:20:48.497 00:20:48.497 real 0m4.014s 00:20:48.497 user 0m3.565s 00:20:48.497 sys 0m0.322s 00:20:48.497 18:48:17 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:48.497 18:48:17 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:20:48.497 ************************************ 00:20:48.497 END TEST bdev_write_zeroes 00:20:48.497 ************************************ 00:20:48.497 18:48:17 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:48.497 18:48:17 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:20:48.497 18:48:17 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:48.497 18:48:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:20:48.497 ************************************ 00:20:48.497 START TEST bdev_json_nonenclosed 00:20:48.497 ************************************ 00:20:48.497 18:48:17 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:48.755 [2024-10-08 18:48:17.324783] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
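bdev_json_nonenclosed, whose startup lines appear just above, is a negative test: bdevperf is handed a JSON config whose top level is not enclosed in {} and is expected to fail cleanly rather than crash, producing the json_config.c error visible below. A minimal sketch of the idea using a hypothetical malformed file (the actual nonenclosed.json shipped in the repo may differ):

  # hypothetical config: top level is a bare key/value, not a JSON object
  cat > /tmp/nonenclosed.json <<'EOF'
  "subsystems": []
  EOF
  # expected: non-zero exit and "not enclosed in {}" in the output
  "$BDEVPERF" --json /tmp/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''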
00:20:48.755 [2024-10-08 18:48:17.324928] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62740 ] 00:20:48.755 [2024-10-08 18:48:17.497239] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.368 [2024-10-08 18:48:17.832303] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:49.368 [2024-10-08 18:48:17.832424] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:20:49.368 [2024-10-08 18:48:17.832459] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:49.368 [2024-10-08 18:48:17.832477] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:49.627 00:20:49.627 real 0m1.127s 00:20:49.627 user 0m0.860s 00:20:49.627 sys 0m0.159s 00:20:49.627 18:48:18 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:49.627 18:48:18 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:20:49.627 ************************************ 00:20:49.627 END TEST bdev_json_nonenclosed 00:20:49.627 ************************************ 00:20:49.885 18:48:18 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:49.885 18:48:18 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:20:49.885 18:48:18 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:49.885 18:48:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:20:49.885 ************************************ 00:20:49.885 START TEST bdev_json_nonarray 00:20:49.885 ************************************ 00:20:49.886 18:48:18 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:49.886 [2024-10-08 18:48:18.509602] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:20:49.886 [2024-10-08 18:48:18.510012] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62771 ] 00:20:50.144 [2024-10-08 18:48:18.682389] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.404 [2024-10-08 18:48:19.008159] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.404 [2024-10-08 18:48:19.008561] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
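The two *ERROR* lines above are the behavior being asserted: bdev_json_nonenclosed and bdev_json_nonarray feed bdevperf deliberately malformed configs, and the json_config_prepare_ctx errors plus the spdk_app_stop'd-on-non-zero warning are exactly what these negative tests expect. The log shows only the filenames, so the following minimal contents are illustrative assumptions that would trip the same two checks:

    # nonenclosed.json (assumed contents): top-level object braces missing
    "subsystems": []

    # nonarray.json (assumed contents): "subsystems" must be an array, not an object
    { "subsystems": {} }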
00:20:50.404 [2024-10-08 18:48:19.008608] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:50.404 [2024-10-08 18:48:19.008629] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:50.971 00:20:50.971 real 0m1.086s 00:20:50.971 user 0m0.814s 00:20:50.971 sys 0m0.164s 00:20:50.971 18:48:19 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:50.971 ************************************ 00:20:50.971 END TEST bdev_json_nonarray 00:20:50.971 ************************************ 00:20:50.971 18:48:19 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:20:50.971 18:48:19 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:20:50.971 18:48:19 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:20:50.971 18:48:19 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:20:50.971 18:48:19 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:20:50.971 18:48:19 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:20:50.971 18:48:19 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:20:50.971 18:48:19 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:50.971 18:48:19 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:20:50.971 18:48:19 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:20:50.971 18:48:19 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:20:50.971 18:48:19 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:20:50.971 00:20:50.971 real 0m50.151s 00:20:50.971 user 1m12.891s 00:20:50.971 sys 0m8.964s 00:20:50.971 ************************************ 00:20:50.971 END TEST blockdev_nvme 00:20:50.971 ************************************ 00:20:50.971 18:48:19 blockdev_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:50.971 18:48:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:20:50.971 18:48:19 -- spdk/autotest.sh@209 -- # uname -s 00:20:50.971 18:48:19 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:20:50.971 18:48:19 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:20:50.971 18:48:19 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:50.971 18:48:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:50.971 18:48:19 -- common/autotest_common.sh@10 -- # set +x 00:20:50.971 ************************************ 00:20:50.971 START TEST blockdev_nvme_gpt 00:20:50.971 ************************************ 00:20:50.971 18:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:20:50.971 * Looking for test storage... 
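Every START/END banner and real/user/sys block in this log comes from the run_test helper in autotest_common.sh, which was just invoked again for blockdev_nvme_gpt. Condensed from the traces it leaves here (the '[' N -le 1 ']' argument check, the xtrace toggles, and the timing output), a simplified reconstruction, not the verbatim source, looks roughly like:

    run_test() {
        local test_name=$1; shift
        # the '[' N -le 1 ']' seen in the trace guards against a missing command
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"          # produces the real/user/sys block after each test
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }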
00:20:50.971 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:20:50.971 18:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:50.971 18:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lcov --version 00:20:50.971 18:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:51.230 18:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:51.230 18:48:19 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:51.230 18:48:19 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:51.230 18:48:19 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:51.230 18:48:19 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:20:51.230 18:48:19 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:20:51.230 18:48:19 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:20:51.230 18:48:19 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:20:51.230 18:48:19 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:20:51.230 18:48:19 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:20:51.230 18:48:19 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:20:51.230 18:48:19 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:51.230 18:48:19 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:20:51.230 18:48:19 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:20:51.230 18:48:19 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:51.230 18:48:19 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:51.230 18:48:19 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:20:51.230 18:48:19 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:20:51.230 18:48:19 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:51.230 18:48:19 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:20:51.230 18:48:19 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:20:51.230 18:48:19 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:20:51.230 18:48:19 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:20:51.230 18:48:19 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:51.230 18:48:19 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:20:51.230 18:48:19 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:20:51.230 18:48:19 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:51.230 18:48:19 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:51.230 18:48:19 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:20:51.230 18:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:51.230 18:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:51.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.230 --rc genhtml_branch_coverage=1 00:20:51.230 --rc genhtml_function_coverage=1 00:20:51.230 --rc genhtml_legend=1 00:20:51.230 --rc geninfo_all_blocks=1 00:20:51.230 --rc geninfo_unexecuted_blocks=1 00:20:51.230 00:20:51.230 ' 00:20:51.230 18:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:51.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.230 --rc 
genhtml_branch_coverage=1 00:20:51.230 --rc genhtml_function_coverage=1 00:20:51.230 --rc genhtml_legend=1 00:20:51.230 --rc geninfo_all_blocks=1 00:20:51.230 --rc geninfo_unexecuted_blocks=1 00:20:51.230 00:20:51.230 ' 00:20:51.230 18:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:51.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.230 --rc genhtml_branch_coverage=1 00:20:51.230 --rc genhtml_function_coverage=1 00:20:51.230 --rc genhtml_legend=1 00:20:51.230 --rc geninfo_all_blocks=1 00:20:51.230 --rc geninfo_unexecuted_blocks=1 00:20:51.230 00:20:51.230 ' 00:20:51.230 18:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:51.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.230 --rc genhtml_branch_coverage=1 00:20:51.230 --rc genhtml_function_coverage=1 00:20:51.230 --rc genhtml_legend=1 00:20:51.230 --rc geninfo_all_blocks=1 00:20:51.230 --rc geninfo_unexecuted_blocks=1 00:20:51.230 00:20:51.230 ' 00:20:51.230 18:48:19 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:20:51.230 18:48:19 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:20:51.230 18:48:19 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:20:51.230 18:48:19 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:51.231 18:48:19 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:20:51.231 18:48:19 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:20:51.231 18:48:19 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:20:51.231 18:48:19 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:20:51.231 18:48:19 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:20:51.231 18:48:19 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:20:51.231 18:48:19 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:20:51.231 18:48:19 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:20:51.231 18:48:19 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:20:51.231 18:48:19 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:20:51.231 18:48:19 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:20:51.231 18:48:19 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:20:51.231 18:48:19 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:20:51.231 18:48:19 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:20:51.231 18:48:19 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:20:51.231 18:48:19 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:20:51.231 18:48:19 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:20:51.231 18:48:19 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:20:51.231 18:48:19 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:20:51.231 18:48:19 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:20:51.231 18:48:19 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62860 00:20:51.231 18:48:19 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:20:51.231 18:48:19 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 62860 
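The long scripts/common.sh trace above is a semantic version comparison: lt 1.15 2 establishes that the installed lcov (1.15) predates 2.x before the 1.x-era LCOV_OPTS are exported. Condensed into a runnable sketch that follows the traced logic (names mirror the trace; the real script also handles other operators via its case "$op" branch and normalizes fields through a decimal helper):

    cmp_versions() {                 # usage: cmp_versions 1.15 '<' 2
        local -a ver1 ver2
        local v d1 d2
        IFS='.-:' read -ra ver1 <<< "$1"    # split on '.', '-' and ':' as in the trace
        IFS='.-:' read -ra ver2 <<< "$3"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            d1=${ver1[v]:-0} d2=${ver2[v]:-0}
            (( d1 > d2 )) && return 1       # strictly newer, so '<' fails
            (( d1 < d2 )) && return 0       # strictly older, so '<' holds
        done
        return 1                            # equal versions are not '<'
    }
    cmp_versions 1.15 '<' 2 && echo "lcov 1.x coverage options selected"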
00:20:51.231 18:48:19 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:20:51.231 18:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # '[' -z 62860 ']' 00:20:51.231 18:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:51.231 18:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:51.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:51.231 18:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:51.231 18:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:51.231 18:48:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:20:51.489 [2024-10-08 18:48:20.005400] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:20:51.489 [2024-10-08 18:48:20.005664] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62860 ] 00:20:51.489 [2024-10-08 18:48:20.205706] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:52.054 [2024-10-08 18:48:20.568567] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:53.020 18:48:21 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:53.020 18:48:21 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # return 0 00:20:53.020 18:48:21 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:20:53.020 18:48:21 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:20:53.020 18:48:21 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:53.586 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:53.586 Waiting for block devices as requested 00:20:53.843 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:53.843 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:53.843 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:20:54.101 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:20:59.366 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:20:59.366 18:48:27 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:20:59.366 18:48:27 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:20:59.366 18:48:27 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:20:59.366 18:48:27 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:20:59.366 18:48:27 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:20:59.366 18:48:27 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:20:59.366 18:48:27 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:20:59.366 18:48:27 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:59.366 18:48:27 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:59.366 18:48:27 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:20:59.366 18:48:27 
blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:20:59.366 18:48:27 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:20:59.366 18:48:27 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:59.366 18:48:27 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:59.366 18:48:27 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:20:59.366 18:48:27 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:20:59.366 18:48:27 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:20:59.366 18:48:27 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:20:59.366 18:48:27 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:59.366 18:48:27 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:20:59.366 18:48:27 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:20:59.366 18:48:27 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:20:59.366 18:48:27 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:20:59.366 18:48:27 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:59.366 18:48:27 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:20:59.366 18:48:27 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:20:59.366 18:48:27 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:20:59.366 18:48:27 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:20:59.366 18:48:27 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:59.366 18:48:27 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:20:59.366 18:48:27 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:20:59.366 18:48:27 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:20:59.366 18:48:27 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:20:59.366 18:48:27 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:59.366 18:48:27 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:20:59.366 18:48:27 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:20:59.366 18:48:27 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:20:59.366 18:48:27 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:20:59.366 18:48:27 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:59.366 18:48:27 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:20:59.366 18:48:27 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:20:59.366 18:48:27 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:20:59.366 18:48:27 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:20:59.366 18:48:27 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:20:59.366 18:48:27 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:20:59.366 18:48:27 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:20:59.366 18:48:27 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:20:59.366 BYT; 00:20:59.366 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:20:59.366 18:48:27 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:20:59.366 BYT; 00:20:59.366 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:20:59.366 18:48:27 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:20:59.366 18:48:27 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:20:59.366 18:48:27 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:20:59.366 18:48:27 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:20:59.366 18:48:27 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:20:59.366 18:48:27 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:20:59.366 18:48:27 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:20:59.366 18:48:27 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:20:59.366 18:48:27 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:20:59.366 18:48:27 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:20:59.366 18:48:27 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:20:59.366 18:48:27 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:20:59.366 18:48:27 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:20:59.366 18:48:27 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:20:59.366 18:48:27 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:20:59.366 18:48:27 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:20:59.366 18:48:27 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:20:59.366 18:48:27 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:20:59.366 18:48:27 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:20:59.366 18:48:27 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:20:59.366 18:48:27 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:20:59.366 18:48:27 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:20:59.366 18:48:27 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:20:59.366 18:48:27 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:20:59.366 18:48:27 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:20:59.366 18:48:27 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:20:59.366 18:48:27 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:20:59.366 18:48:27 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:20:59.366 18:48:27 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:21:00.302 The operation has completed successfully. 00:21:00.302 18:48:28 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:21:01.680 The operation has completed successfully. 00:21:01.680 18:48:29 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:01.945 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:02.879 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:21:02.880 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:02.880 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:02.880 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:21:02.880 18:48:31 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:21:02.880 18:48:31 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.880 18:48:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:21:02.880 [] 00:21:02.880 18:48:31 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.880 18:48:31 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:21:02.880 18:48:31 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:21:02.880 18:48:31 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:21:02.880 18:48:31 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:02.880 18:48:31 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:21:02.880 18:48:31 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.880 18:48:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:21:03.446 18:48:31 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.446 18:48:31 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:21:03.446 18:48:31 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.446 18:48:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:21:03.446 18:48:31 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.446 18:48:31 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:21:03.446 18:48:31 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:21:03.446 18:48:31 
blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.446 18:48:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:21:03.446 18:48:31 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.446 18:48:31 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:21:03.446 18:48:31 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.446 18:48:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:21:03.446 18:48:31 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.446 18:48:31 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:21:03.446 18:48:31 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.446 18:48:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:21:03.446 18:48:31 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.446 18:48:31 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:21:03.447 18:48:32 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:21:03.447 18:48:32 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:21:03.447 18:48:32 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.447 18:48:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:21:03.447 18:48:32 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.447 18:48:32 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:21:03.447 18:48:32 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:21:03.447 18:48:32 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "8008a49f-2933-4b31-a394-376555504b41"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "8008a49f-2933-4b31-a394-376555504b41",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "31d01e65-6b64-45ca-9bdd-ed4ed339509f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "31d01e65-6b64-45ca-9bdd-ed4ed339509f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "3daf82d5-091f-4dbb-8d82-19c127ed95a7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3daf82d5-091f-4dbb-8d82-19c127ed95a7",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "f2be5f8e-320b-46f0-aedd-2336a18d643e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f2be5f8e-320b-46f0-aedd-2336a18d643e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "656b799c-0b73-480c-9f25-3c6231031221"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "656b799c-0b73-480c-9f25-3c6231031221",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:21:03.447 18:48:32 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:21:03.447 18:48:32 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:21:03.447 18:48:32 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:21:03.447 18:48:32 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 62860 00:21:03.447 18:48:32 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # '[' -z 62860 ']' 00:21:03.447 18:48:32 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # kill -0 62860 00:21:03.447 18:48:32 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # uname 00:21:03.447 18:48:32 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:03.447 18:48:32 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62860 00:21:03.705 18:48:32 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:03.705 18:48:32 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:03.705 killing process with pid 62860 00:21:03.705 18:48:32 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62860' 00:21:03.705 18:48:32 blockdev_nvme_gpt -- common/autotest_common.sh@969 -- # kill 62860 00:21:03.705 18:48:32 blockdev_nvme_gpt -- common/autotest_common.sh@974 -- # wait 62860 00:21:07.017 18:48:35 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:07.017 18:48:35 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:21:07.017 18:48:35 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:21:07.017 18:48:35 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:07.017 18:48:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:21:07.017 ************************************ 00:21:07.017 START TEST bdev_hello_world 00:21:07.017 ************************************ 00:21:07.017 18:48:35 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:21:07.017 
[2024-10-08 18:48:35.185795] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:21:07.017 [2024-10-08 18:48:35.186036] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63518 ] 00:21:07.017 [2024-10-08 18:48:35.353413] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.017 [2024-10-08 18:48:35.592715] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.583 [2024-10-08 18:48:36.293716] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:21:07.583 [2024-10-08 18:48:36.293791] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:21:07.583 [2024-10-08 18:48:36.293829] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:21:07.583 [2024-10-08 18:48:36.297460] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:21:07.583 [2024-10-08 18:48:36.298153] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:21:07.583 [2024-10-08 18:48:36.298194] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:21:07.583 [2024-10-08 18:48:36.298465] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:21:07.583 00:21:07.583 [2024-10-08 18:48:36.298503] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:21:09.483 00:21:09.483 real 0m2.697s 00:21:09.483 user 0m2.306s 00:21:09.483 sys 0m0.279s 00:21:09.483 18:48:37 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:09.483 18:48:37 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:21:09.483 ************************************ 00:21:09.483 END TEST bdev_hello_world 00:21:09.483 ************************************ 00:21:09.483 18:48:37 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:21:09.483 18:48:37 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:09.483 18:48:37 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:09.483 18:48:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:21:09.483 ************************************ 00:21:09.483 START TEST bdev_bounds 00:21:09.483 ************************************ 00:21:09.483 18:48:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:21:09.483 18:48:37 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=63566 00:21:09.483 18:48:37 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:21:09.483 18:48:37 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:21:09.483 Process bdevio pid: 63566 00:21:09.483 18:48:37 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 63566' 00:21:09.483 18:48:37 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 63566 00:21:09.483 18:48:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 63566 ']' 00:21:09.483 18:48:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:09.483 18:48:37 
blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:09.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:09.483 18:48:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:09.483 18:48:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:09.483 18:48:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:21:09.483 [2024-10-08 18:48:37.965541] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:21:09.483 [2024-10-08 18:48:37.965825] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63566 ] 00:21:09.483 [2024-10-08 18:48:38.161415] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:09.742 [2024-10-08 18:48:38.481338] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:21:09.742 [2024-10-08 18:48:38.481381] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:09.742 [2024-10-08 18:48:38.481399] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:21:10.680 18:48:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:10.680 18:48:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:21:10.680 18:48:39 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:21:10.680 I/O targets: 00:21:10.680 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:21:10.680 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:21:10.680 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:21:10.680 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:21:10.680 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:21:10.680 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:21:10.680 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:21:10.680 00:21:10.680 00:21:10.680 CUnit - A unit testing framework for C - Version 2.1-3 00:21:10.680 http://cunit.sourceforge.net/ 00:21:10.680 00:21:10.680 00:21:10.680 Suite: bdevio tests on: Nvme3n1 00:21:10.680 Test: blockdev write read block ...passed 00:21:10.680 Test: blockdev write zeroes read block ...passed 00:21:10.939 Test: blockdev write zeroes read no split ...passed 00:21:10.940 Test: blockdev write zeroes read split ...passed 00:21:10.940 Test: blockdev write zeroes read split partial ...passed 00:21:10.940 Test: blockdev reset ...[2024-10-08 18:48:39.509187] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:21:10.940 [2024-10-08 18:48:39.513906] bdev_nvme.c:2221:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:10.940 passed 00:21:10.940 Test: blockdev write read 8 blocks ...passed 00:21:10.940 Test: blockdev write read size > 128k ...passed 00:21:10.940 Test: blockdev write read invalid size ...passed 00:21:10.940 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:10.940 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:10.940 Test: blockdev write read max offset ...passed 00:21:10.940 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:10.940 Test: blockdev writev readv 8 blocks ...passed 00:21:10.940 Test: blockdev writev readv 30 x 1block ...passed 00:21:10.940 Test: blockdev writev readv block ...passed 00:21:10.940 Test: blockdev writev readv size > 128k ...passed 00:21:10.940 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:10.940 Test: blockdev comparev and writev ...[2024-10-08 18:48:39.537462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ac206000 len:0x1000 00:21:10.940 [2024-10-08 18:48:39.537552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:21:10.940 passed 00:21:10.940 Test: blockdev nvme passthru rw ...passed 00:21:10.940 Test: blockdev nvme passthru vendor specific ...passed 00:21:10.940 Test: blockdev nvme admin passthru ...[2024-10-08 18:48:39.538316] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:21:10.940 [2024-10-08 18:48:39.538361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:21:10.940 passed 00:21:10.940 Test: blockdev copy ...passed 00:21:10.940 Suite: bdevio tests on: Nvme2n3 00:21:10.940 Test: blockdev write read block ...passed 00:21:10.940 Test: blockdev write zeroes read block ...passed 00:21:10.940 Test: blockdev write zeroes read no split ...passed 00:21:10.940 Test: blockdev write zeroes read split ...passed 00:21:10.940 Test: blockdev write zeroes read split partial ...passed 00:21:10.940 Test: blockdev reset ...[2024-10-08 18:48:39.665841] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:21:10.940 [2024-10-08 18:48:39.671030] bdev_nvme.c:2221:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:10.940 passed 00:21:10.940 Test: blockdev write read 8 blocks ...passed 00:21:10.940 Test: blockdev write read size > 128k ...passed 00:21:10.940 Test: blockdev write read invalid size ...passed 00:21:10.940 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:10.940 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:10.940 Test: blockdev write read max offset ...passed 00:21:10.940 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:10.940 Test: blockdev writev readv 8 blocks ...passed 00:21:10.940 Test: blockdev writev readv 30 x 1block ...passed 00:21:10.940 Test: blockdev writev readv block ...passed 00:21:10.940 Test: blockdev writev readv size > 128k ...passed 00:21:10.940 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:10.940 Test: blockdev comparev and writev ...[2024-10-08 18:48:39.681541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bc43c000 len:0x1000 00:21:10.940 [2024-10-08 18:48:39.681621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:21:10.940 passed 00:21:10.940 Test: blockdev nvme passthru rw ...passed 00:21:10.940 Test: blockdev nvme passthru vendor specific ...[2024-10-08 18:48:39.682396] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:21:10.940 [2024-10-08 18:48:39.682439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:21:10.940 passed 00:21:10.940 Test: blockdev nvme admin passthru ...passed 00:21:11.199 Test: blockdev copy ...passed 00:21:11.199 Suite: bdevio tests on: Nvme2n2 00:21:11.199 Test: blockdev write read block ...passed 00:21:11.199 Test: blockdev write zeroes read block ...passed 00:21:11.199 Test: blockdev write zeroes read no split ...passed 00:21:11.199 Test: blockdev write zeroes read split ...passed 00:21:11.199 Test: blockdev write zeroes read split partial ...passed 00:21:11.199 Test: blockdev reset ...[2024-10-08 18:48:39.856236] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:21:11.199 [2024-10-08 18:48:39.861604] bdev_nvme.c:2221:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:11.199 passed 00:21:11.199 Test: blockdev write read 8 blocks ...passed 00:21:11.199 Test: blockdev write read size > 128k ...passed 00:21:11.199 Test: blockdev write read invalid size ...passed 00:21:11.199 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:11.199 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:11.199 Test: blockdev write read max offset ...passed 00:21:11.199 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:11.199 Test: blockdev writev readv 8 blocks ...passed 00:21:11.199 Test: blockdev writev readv 30 x 1block ...passed 00:21:11.199 Test: blockdev writev readv block ...passed 00:21:11.199 Test: blockdev writev readv size > 128k ...passed 00:21:11.199 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:11.199 Test: blockdev comparev and writev ...[2024-10-08 18:48:39.869351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bc436000 len:0x1000 00:21:11.199 [2024-10-08 18:48:39.869430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:21:11.199 passed 00:21:11.199 Test: blockdev nvme passthru rw ...passed 00:21:11.199 Test: blockdev nvme passthru vendor specific ...passed 00:21:11.199 Test: blockdev nvme admin passthru ...[2024-10-08 18:48:39.870288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:21:11.199 [2024-10-08 18:48:39.870332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:21:11.199 passed 00:21:11.199 Test: blockdev copy ...passed 00:21:11.199 Suite: bdevio tests on: Nvme2n1 00:21:11.199 Test: blockdev write read block ...passed 00:21:11.199 Test: blockdev write zeroes read block ...passed 00:21:11.200 Test: blockdev write zeroes read no split ...passed 00:21:11.458 Test: blockdev write zeroes read split ...passed 00:21:11.458 Test: blockdev write zeroes read split partial ...passed 00:21:11.458 Test: blockdev reset ...[2024-10-08 18:48:40.005031] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:21:11.458 [2024-10-08 18:48:40.010626] bdev_nvme.c:2221:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:11.458 passed 00:21:11.458 Test: blockdev write read 8 blocks ...passed 00:21:11.458 Test: blockdev write read size > 128k ...passed 00:21:11.458 Test: blockdev write read invalid size ...passed 00:21:11.458 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:11.458 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:11.458 Test: blockdev write read max offset ...passed 00:21:11.458 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:11.458 Test: blockdev writev readv 8 blocks ...passed 00:21:11.458 Test: blockdev writev readv 30 x 1block ...passed 00:21:11.458 Test: blockdev writev readv block ...passed 00:21:11.458 Test: blockdev writev readv size > 128k ...passed 00:21:11.459 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:11.459 Test: blockdev comparev and writev ...[2024-10-08 18:48:40.022182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bc432000 len:0x1000 00:21:11.459 [2024-10-08 18:48:40.022270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:21:11.459 passed 00:21:11.459 Test: blockdev nvme passthru rw ...passed 00:21:11.459 Test: blockdev nvme passthru vendor specific ...passed 00:21:11.459 Test: blockdev nvme admin passthru ...[2024-10-08 18:48:40.023421] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:21:11.459 [2024-10-08 18:48:40.023475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:21:11.459 passed 00:21:11.459 Test: blockdev copy ...passed 00:21:11.459 Suite: bdevio tests on: Nvme1n1p2 00:21:11.459 Test: blockdev write read block ...passed 00:21:11.459 Test: blockdev write zeroes read block ...passed 00:21:11.459 Test: blockdev write zeroes read no split ...passed 00:21:11.459 Test: blockdev write zeroes read split ...passed 00:21:11.459 Test: blockdev write zeroes read split partial ...passed 00:21:11.459 Test: blockdev reset ...[2024-10-08 18:48:40.131017] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:21:11.459 [2024-10-08 18:48:40.135681] bdev_nvme.c:2221:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:11.459 passed 00:21:11.459 Test: blockdev write read 8 blocks ...passed 00:21:11.459 Test: blockdev write read size > 128k ...passed 00:21:11.459 Test: blockdev write read invalid size ...passed 00:21:11.459 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:11.459 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:11.459 Test: blockdev write read max offset ...passed 00:21:11.459 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:11.459 Test: blockdev writev readv 8 blocks ...passed 00:21:11.459 Test: blockdev writev readv 30 x 1block ...passed 00:21:11.459 Test: blockdev writev readv block ...passed 00:21:11.459 Test: blockdev writev readv size > 128k ...passed 00:21:11.459 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:11.459 Test: blockdev comparev and writev ...[2024-10-08 18:48:40.145346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2bc42e000 len:0x1000 00:21:11.459 [2024-10-08 18:48:40.145452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:21:11.459 passed 00:21:11.459 Test: blockdev nvme passthru rw ...passed 00:21:11.459 Test: blockdev nvme passthru vendor specific ...passed 00:21:11.459 Test: blockdev nvme admin passthru ...passed 00:21:11.459 Test: blockdev copy ...passed 00:21:11.459 Suite: bdevio tests on: Nvme1n1p1 00:21:11.459 Test: blockdev write read block ...passed 00:21:11.459 Test: blockdev write zeroes read block ...passed 00:21:11.459 Test: blockdev write zeroes read no split ...passed 00:21:11.459 Test: blockdev write zeroes read split ...passed 00:21:11.717 Test: blockdev write zeroes read split partial ...passed 00:21:11.717 Test: blockdev reset ...[2024-10-08 18:48:40.246837] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:21:11.717 [2024-10-08 18:48:40.251618] bdev_nvme.c:2221:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
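Note the lba fields in the compare notices: this suite ran against the GPT partition bdev Nvme1n1p2, but the controller reports raw namespace LBAs, so the compare at partition offset 0 is printed as lba:655360 here (and the Nvme1n1p1 suite below shows lba:256). Those are the partitions' start LBAs on Nvme1n1, and the translation is a plain offset add:

    # Partition-relative LBA -> parent-namespace LBA, as printed by the controller.
    # Start LBAs are read off the notices in this log, not queried live.
    p1_start=256; p2_start=655360
    echo "Nvme1n1p2 lba 0 -> Nvme1n1 lba $((p2_start + 0))"   # 655360, as logged here
    echo "Nvme1n1p1 lba 0 -> Nvme1n1 lba $((p1_start + 0))"   # 256, as logged below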
00:21:11.717 passed 00:21:11.717 Test: blockdev write read 8 blocks ...passed 00:21:11.717 Test: blockdev write read size > 128k ...passed 00:21:11.717 Test: blockdev write read invalid size ...passed 00:21:11.717 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:11.717 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:11.717 Test: blockdev write read max offset ...passed 00:21:11.718 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:11.718 Test: blockdev writev readv 8 blocks ...passed 00:21:11.718 Test: blockdev writev readv 30 x 1block ...passed 00:21:11.718 Test: blockdev writev readv block ...passed 00:21:11.718 Test: blockdev writev readv size > 128k ...passed 00:21:11.718 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:11.718 Test: blockdev comparev and writev ...[2024-10-08 18:48:40.260873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b1c0e000 len:0x1000 00:21:11.718 [2024-10-08 18:48:40.260948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:21:11.718 passed 00:21:11.718 Test: blockdev nvme passthru rw ...passed 00:21:11.718 Test: blockdev nvme passthru vendor specific ...passed 00:21:11.718 Test: blockdev nvme admin passthru ...passed 00:21:11.718 Test: blockdev copy ...passed 00:21:11.718 Suite: bdevio tests on: Nvme0n1 00:21:11.718 Test: blockdev write read block ...passed 00:21:11.718 Test: blockdev write zeroes read block ...passed 00:21:11.718 Test: blockdev write zeroes read no split ...passed 00:21:11.718 Test: blockdev write zeroes read split ...passed 00:21:11.718 Test: blockdev write zeroes read split partial ...passed 00:21:11.718 Test: blockdev reset ...[2024-10-08 18:48:40.365350] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:21:11.718 [2024-10-08 18:48:40.370129] bdev_nvme.c:2221:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:11.718 passed 00:21:11.718 Test: blockdev write read 8 blocks ...passed 00:21:11.718 Test: blockdev write read size > 128k ...passed 00:21:11.718 Test: blockdev write read invalid size ...passed 00:21:11.718 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:11.718 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:11.718 Test: blockdev write read max offset ...passed 00:21:11.718 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:11.718 Test: blockdev writev readv 8 blocks ...passed 00:21:11.718 Test: blockdev writev readv 30 x 1block ...passed 00:21:11.718 Test: blockdev writev readv block ...passed 00:21:11.718 Test: blockdev writev readv size > 128k ...passed 00:21:11.718 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:11.718 Test: blockdev comparev and writev ...[2024-10-08 18:48:40.379284] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:21:11.718 separate metadata which is not supported yet. 
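Unlike the other suites, Nvme0n1 skips "comparev and writev" entirely: bdevio refuses to run it on a bdev formatted with separate (non-interleaved) metadata, as the *ERROR* line above states. Whether a bdev carries per-block metadata can be checked up front; a sketch, assuming this SPDK build reports an md_size field in bdev_get_bdevs output (adjust the jq filter if the field name differs in your version):

    # Inspect metadata layout before expecting comparev to run (md_size assumed).
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/scripts/rpc.py" bdev_get_bdevs -b Nvme0n1 | jq '.[0] | {name, block_size, md_size}'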
00:21:11.718 passed 00:21:11.718 Test: blockdev nvme passthru rw ...passed 00:21:11.718 Test: blockdev nvme passthru vendor specific ...[2024-10-08 18:48:40.379929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:21:11.718 [2024-10-08 18:48:40.380002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:21:11.718 passed 00:21:11.718 Test: blockdev nvme admin passthru ...passed 00:21:11.718 Test: blockdev copy ...passed 00:21:11.718 00:21:11.718 Run Summary: Type Total Ran Passed Failed Inactive 00:21:11.718 suites 7 7 n/a 0 0 00:21:11.718 tests 161 161 161 0 0 00:21:11.718 asserts 1025 1025 1025 0 n/a 00:21:11.718 00:21:11.718 Elapsed time = 2.619 seconds 00:21:11.718 0 00:21:11.718 18:48:40 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 63566 00:21:11.718 18:48:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 63566 ']' 00:21:11.718 18:48:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 63566 00:21:11.718 18:48:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:21:11.718 18:48:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:11.718 18:48:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63566 00:21:11.718 18:48:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:11.718 18:48:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:11.718 18:48:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63566' 00:21:11.718 killing process with pid 63566 00:21:11.718 18:48:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@969 -- # kill 63566 00:21:11.718 18:48:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@974 -- # wait 63566 00:21:13.092 18:48:41 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:21:13.092 00:21:13.092 real 0m3.934s 00:21:13.092 user 0m9.702s 00:21:13.092 sys 0m0.516s 00:21:13.092 ************************************ 00:21:13.092 END TEST bdev_bounds 00:21:13.092 ************************************ 00:21:13.092 18:48:41 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:13.092 18:48:41 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:21:13.092 18:48:41 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:21:13.092 18:48:41 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:21:13.092 18:48:41 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:13.092 18:48:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:21:13.092 ************************************ 00:21:13.092 START TEST bdev_nbd 00:21:13.092 ************************************ 00:21:13.092 18:48:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:21:13.092 18:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:21:13.092 18:48:41 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:21:13.092 18:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:13.093 18:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:13.093 18:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:21:13.093 18:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:21:13.093 18:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:21:13.093 18:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:21:13.093 18:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:21:13.093 18:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:21:13.093 18:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:21:13.093 18:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:21:13.093 18:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:21:13.093 18:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:21:13.093 18:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:21:13.093 18:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=63642 00:21:13.093 18:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:21:13.093 18:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:21:13.093 18:48:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 63642 /var/tmp/spdk-nbd.sock 00:21:13.093 18:48:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 63642 ']' 00:21:13.093 18:48:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:21:13.093 18:48:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:13.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:21:13.093 18:48:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:21:13.093 18:48:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:13.093 18:48:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:21:13.351 [2024-10-08 18:48:41.947690] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
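The bdev_nbd test starting here runs in two passes: nbd_rpc_start_stop_verify first exports each bdev over NBD without naming a device, letting SPDK pick the next free /dev/nbdX and report it back (the waitfornbd helper then polls /proc/partitions up to 20 times and does a single direct-I/O dd read as a sanity check), and nbd_rpc_data_verify later re-exports onto fixed devices for data checks. Condensed to its essentials, one cycle looks roughly like the sketch below; the rpc_get_methods readiness poll is an assumption standing in for the harness's waitforlisten, and the nbd kernel module must be loaded (the harness checks /sys/module/nbd):

    # Sketch of one export/verify/teardown cycle from this test (not verbatim).
    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk-nbd.sock
    "$SPDK/test/app/bdev_svc/bdev_svc" -r "$SOCK" -i 0 --json "$SPDK/test/bdev/bdev.json" &
    until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods &>/dev/null; do sleep 0.2; done
    nbd=$("$SPDK/scripts/rpc.py" -s "$SOCK" nbd_start_disk Nvme0n1)  # no device arg: SPDK picks one
    dd if="$nbd" of=/tmp/nbdtest bs=4096 count=1 iflag=direct        # one-block sanity read
    "$SPDK/scripts/rpc.py" -s "$SOCK" nbd_stop_disk "$nbd"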
00:21:13.351 [2024-10-08 18:48:41.947846] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:13.609 [2024-10-08 18:48:42.128438] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.867 [2024-10-08 18:48:42.470827] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.800 18:48:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:14.800 18:48:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:21:14.800 18:48:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:21:14.800 18:48:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:14.800 18:48:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:21:14.800 18:48:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:21:14.800 18:48:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:21:14.800 18:48:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:14.800 18:48:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:21:14.800 18:48:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:21:14.800 18:48:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:21:14.800 18:48:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:21:14.800 18:48:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:21:14.800 18:48:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:21:14.800 18:48:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:21:15.058 18:48:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:21:15.058 18:48:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:21:15.058 18:48:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:21:15.058 18:48:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:21:15.058 18:48:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:21:15.058 18:48:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:15.058 18:48:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:15.058 18:48:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:21:15.058 18:48:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:21:15.058 18:48:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:15.058 18:48:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:15.058 18:48:43 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:15.058 1+0 records in 00:21:15.058 1+0 records out 00:21:15.058 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000452323 s, 9.1 MB/s 00:21:15.058 18:48:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:15.058 18:48:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:21:15.058 18:48:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:15.058 18:48:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:15.058 18:48:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:21:15.058 18:48:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:15.058 18:48:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:21:15.058 18:48:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:21:15.316 18:48:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:21:15.316 18:48:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:21:15.316 18:48:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:21:15.316 18:48:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:21:15.316 18:48:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:21:15.316 18:48:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:15.316 18:48:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:15.316 18:48:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:21:15.316 18:48:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:21:15.316 18:48:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:15.316 18:48:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:15.316 18:48:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:15.316 1+0 records in 00:21:15.316 1+0 records out 00:21:15.316 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000567381 s, 7.2 MB/s 00:21:15.316 18:48:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:15.316 18:48:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:21:15.316 18:48:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:15.316 18:48:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:15.316 18:48:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:21:15.316 18:48:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:15.316 18:48:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:21:15.316 18:48:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:21:15.574 18:48:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:21:15.574 18:48:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:21:15.574 18:48:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:21:15.574 18:48:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:21:15.574 18:48:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:21:15.574 18:48:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:15.574 18:48:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:15.574 18:48:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:21:15.574 18:48:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:21:15.574 18:48:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:15.574 18:48:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:15.574 18:48:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:15.574 1+0 records in 00:21:15.574 1+0 records out 00:21:15.574 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000621324 s, 6.6 MB/s 00:21:15.574 18:48:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:15.574 18:48:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:21:15.574 18:48:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:15.574 18:48:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:15.574 18:48:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:21:15.574 18:48:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:15.574 18:48:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:21:15.574 18:48:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:21:16.140 18:48:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:21:16.140 18:48:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:21:16.140 18:48:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:21:16.140 18:48:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:21:16.140 18:48:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:21:16.140 18:48:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:16.140 18:48:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:16.140 18:48:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:21:16.140 18:48:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:21:16.140 18:48:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:16.140 18:48:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:16.141 18:48:44 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:16.141 1+0 records in 00:21:16.141 1+0 records out 00:21:16.141 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00065674 s, 6.2 MB/s 00:21:16.141 18:48:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:16.141 18:48:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:21:16.141 18:48:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:16.141 18:48:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:16.141 18:48:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:21:16.141 18:48:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:16.141 18:48:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:21:16.141 18:48:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:21:16.398 18:48:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:21:16.398 18:48:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:21:16.398 18:48:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:21:16.398 18:48:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:21:16.398 18:48:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:21:16.398 18:48:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:16.398 18:48:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:16.398 18:48:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:21:16.398 18:48:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:21:16.398 18:48:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:16.398 18:48:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:16.398 18:48:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:16.398 1+0 records in 00:21:16.398 1+0 records out 00:21:16.398 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00068411 s, 6.0 MB/s 00:21:16.398 18:48:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:16.398 18:48:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:21:16.398 18:48:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:16.398 18:48:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:16.398 18:48:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:21:16.398 18:48:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:16.398 18:48:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:21:16.398 18:48:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
00:21:16.657 18:48:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:21:16.657 18:48:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:21:16.657 18:48:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:21:16.657 18:48:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:21:16.657 18:48:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:21:16.657 18:48:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:16.657 18:48:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:16.657 18:48:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:21:16.657 18:48:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:21:16.657 18:48:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:16.657 18:48:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:16.657 18:48:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:16.657 1+0 records in 00:21:16.657 1+0 records out 00:21:16.657 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000743037 s, 5.5 MB/s 00:21:16.657 18:48:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:16.657 18:48:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:21:16.657 18:48:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:16.657 18:48:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:16.657 18:48:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:21:16.657 18:48:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:16.657 18:48:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:21:16.657 18:48:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:21:17.224 18:48:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:21:17.224 18:48:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:21:17.224 18:48:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:21:17.224 18:48:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:21:17.224 18:48:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:21:17.224 18:48:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:17.224 18:48:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:17.224 18:48:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:21:17.224 18:48:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:21:17.224 18:48:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:17.224 18:48:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:17.224 18:48:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # 
dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:17.224 1+0 records in 00:21:17.224 1+0 records out 00:21:17.224 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000650481 s, 6.3 MB/s 00:21:17.224 18:48:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:17.224 18:48:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:21:17.224 18:48:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:17.224 18:48:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:17.224 18:48:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:21:17.224 18:48:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:17.224 18:48:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:21:17.224 18:48:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:17.482 18:48:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:21:17.482 { 00:21:17.482 "nbd_device": "/dev/nbd0", 00:21:17.482 "bdev_name": "Nvme0n1" 00:21:17.482 }, 00:21:17.482 { 00:21:17.482 "nbd_device": "/dev/nbd1", 00:21:17.482 "bdev_name": "Nvme1n1p1" 00:21:17.482 }, 00:21:17.482 { 00:21:17.482 "nbd_device": "/dev/nbd2", 00:21:17.482 "bdev_name": "Nvme1n1p2" 00:21:17.482 }, 00:21:17.482 { 00:21:17.482 "nbd_device": "/dev/nbd3", 00:21:17.482 "bdev_name": "Nvme2n1" 00:21:17.482 }, 00:21:17.482 { 00:21:17.482 "nbd_device": "/dev/nbd4", 00:21:17.482 "bdev_name": "Nvme2n2" 00:21:17.482 }, 00:21:17.482 { 00:21:17.482 "nbd_device": "/dev/nbd5", 00:21:17.482 "bdev_name": "Nvme2n3" 00:21:17.482 }, 00:21:17.482 { 00:21:17.482 "nbd_device": "/dev/nbd6", 00:21:17.482 "bdev_name": "Nvme3n1" 00:21:17.482 } 00:21:17.482 ]' 00:21:17.482 18:48:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:21:17.482 18:48:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:21:17.482 18:48:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:21:17.482 { 00:21:17.482 "nbd_device": "/dev/nbd0", 00:21:17.482 "bdev_name": "Nvme0n1" 00:21:17.482 }, 00:21:17.482 { 00:21:17.482 "nbd_device": "/dev/nbd1", 00:21:17.482 "bdev_name": "Nvme1n1p1" 00:21:17.482 }, 00:21:17.482 { 00:21:17.482 "nbd_device": "/dev/nbd2", 00:21:17.482 "bdev_name": "Nvme1n1p2" 00:21:17.482 }, 00:21:17.482 { 00:21:17.482 "nbd_device": "/dev/nbd3", 00:21:17.482 "bdev_name": "Nvme2n1" 00:21:17.482 }, 00:21:17.482 { 00:21:17.482 "nbd_device": "/dev/nbd4", 00:21:17.482 "bdev_name": "Nvme2n2" 00:21:17.482 }, 00:21:17.482 { 00:21:17.482 "nbd_device": "/dev/nbd5", 00:21:17.482 "bdev_name": "Nvme2n3" 00:21:17.482 }, 00:21:17.482 { 00:21:17.482 "nbd_device": "/dev/nbd6", 00:21:17.482 "bdev_name": "Nvme3n1" 00:21:17.482 } 00:21:17.482 ]' 00:21:17.482 18:48:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:21:17.482 18:48:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:17.482 18:48:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:21:17.482 18:48:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:17.482 18:48:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:21:17.482 18:48:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:17.482 18:48:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:18.133 18:48:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:18.133 18:48:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:18.133 18:48:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:18.133 18:48:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:18.133 18:48:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:18.133 18:48:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:18.133 18:48:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:18.133 18:48:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:18.133 18:48:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:18.133 18:48:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:21:18.398 18:48:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:18.398 18:48:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:18.399 18:48:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:18.399 18:48:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:18.399 18:48:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:18.399 18:48:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:18.399 18:48:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:18.399 18:48:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:18.399 18:48:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:18.399 18:48:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:21:18.657 18:48:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:21:18.657 18:48:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:21:18.657 18:48:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:21:18.657 18:48:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:18.657 18:48:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:18.657 18:48:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:21:18.657 18:48:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:18.657 18:48:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:18.657 18:48:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:18.657 18:48:47 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:21:18.915 18:48:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:21:18.915 18:48:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:21:18.915 18:48:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:21:18.915 18:48:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:18.915 18:48:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:18.915 18:48:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:21:18.915 18:48:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:18.915 18:48:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:18.915 18:48:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:18.915 18:48:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:21:19.173 18:48:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:21:19.173 18:48:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:21:19.173 18:48:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:21:19.173 18:48:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:19.173 18:48:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:19.173 18:48:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:21:19.173 18:48:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:19.173 18:48:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:19.173 18:48:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:19.173 18:48:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:21:19.432 18:48:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:21:19.432 18:48:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:21:19.432 18:48:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:21:19.432 18:48:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:19.432 18:48:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:19.432 18:48:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:21:19.432 18:48:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:19.432 18:48:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:19.432 18:48:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:19.432 18:48:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:21:19.698 18:48:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:21:19.698 18:48:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:21:19.698 18:48:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
00:21:19.698 18:48:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:19.698 18:48:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:19.698 18:48:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:21:19.698 18:48:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:19.698 18:48:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:19.698 18:48:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:19.698 18:48:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:19.698 18:48:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:19.957 18:48:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:21:19.957 18:48:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:19.957 18:48:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:21:19.957 18:48:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:21:19.957 18:48:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:21:19.957 18:48:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:19.957 18:48:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:21:19.957 18:48:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:21:19.957 18:48:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:21:19.957 18:48:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:21:19.957 18:48:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:21:19.957 18:48:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:21:19.957 18:48:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:21:19.957 18:48:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:19.957 18:48:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:21:19.958 18:48:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:21:19.958 18:48:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:21:19.958 18:48:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:21:19.958 18:48:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:21:19.958 18:48:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:19.958 18:48:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:21:19.958 18:48:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:19.958 18:48:48 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:21:19.958 18:48:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:19.958 18:48:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:21:19.958 18:48:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:19.958 18:48:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:21:19.958 18:48:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:21:20.525 /dev/nbd0 00:21:20.525 18:48:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:20.525 18:48:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:20.525 18:48:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:21:20.525 18:48:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:21:20.525 18:48:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:20.525 18:48:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:20.525 18:48:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:21:20.525 18:48:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:21:20.525 18:48:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:20.525 18:48:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:20.525 18:48:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:20.525 1+0 records in 00:21:20.525 1+0 records out 00:21:20.525 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000618858 s, 6.6 MB/s 00:21:20.525 18:48:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:20.525 18:48:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:21:20.525 18:48:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:20.525 18:48:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:20.525 18:48:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:21:20.525 18:48:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:20.525 18:48:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:21:20.525 18:48:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:21:20.783 /dev/nbd1 00:21:20.783 18:48:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:20.783 18:48:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:20.783 18:48:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:21:20.783 18:48:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:21:20.783 18:48:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:20.783 18:48:49 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:20.783 18:48:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:21:20.783 18:48:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:21:20.783 18:48:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:20.783 18:48:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:20.783 18:48:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:20.783 1+0 records in 00:21:20.783 1+0 records out 00:21:20.783 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000545686 s, 7.5 MB/s 00:21:20.783 18:48:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:20.783 18:48:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:21:20.783 18:48:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:20.783 18:48:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:20.783 18:48:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:21:20.783 18:48:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:20.783 18:48:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:21:20.783 18:48:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:21:21.120 /dev/nbd10 00:21:21.120 18:48:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:21:21.120 18:48:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:21:21.120 18:48:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:21:21.120 18:48:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:21:21.120 18:48:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:21.120 18:48:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:21.120 18:48:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:21:21.120 18:48:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:21:21.120 18:48:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:21.120 18:48:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:21.120 18:48:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:21.120 1+0 records in 00:21:21.120 1+0 records out 00:21:21.120 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000726057 s, 5.6 MB/s 00:21:21.120 18:48:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:21.120 18:48:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:21:21.120 18:48:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:21.120 18:48:49 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:21.120 18:48:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:21:21.120 18:48:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:21.120 18:48:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:21:21.120 18:48:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:21:21.379 /dev/nbd11 00:21:21.379 18:48:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:21:21.379 18:48:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:21:21.379 18:48:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:21:21.379 18:48:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:21:21.379 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:21.379 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:21.379 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:21:21.379 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:21:21.379 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:21.379 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:21.379 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:21.379 1+0 records in 00:21:21.379 1+0 records out 00:21:21.379 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000558156 s, 7.3 MB/s 00:21:21.379 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:21.379 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:21:21.379 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:21.379 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:21.379 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:21:21.379 18:48:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:21.379 18:48:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:21:21.379 18:48:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:21:21.638 /dev/nbd12 00:21:21.638 18:48:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:21:21.638 18:48:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:21:21.638 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:21:21.638 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:21:21.638 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:21.638 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:21.638 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 
00:21:21.638 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:21:21.638 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:21.638 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:21.638 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:21.638 1+0 records in 00:21:21.638 1+0 records out 00:21:21.638 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00179575 s, 2.3 MB/s 00:21:21.638 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:21.638 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:21:21.638 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:21.638 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:21.638 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:21:21.638 18:48:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:21.638 18:48:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:21:21.638 18:48:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:21:21.896 /dev/nbd13 00:21:21.896 18:48:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:21:21.896 18:48:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:21:21.896 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:21:21.896 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:21:21.896 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:21.896 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:21.896 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:21:21.896 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:21:21.896 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:21.896 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:21.896 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:21.896 1+0 records in 00:21:21.896 1+0 records out 00:21:21.896 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000743993 s, 5.5 MB/s 00:21:21.896 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:21.896 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:21:21.896 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:21.896 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:21.896 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:21:21.896 18:48:50 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:21.896 18:48:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:21:21.896 18:48:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:21:22.153 /dev/nbd14 00:21:22.153 18:48:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:21:22.153 18:48:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:21:22.153 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:21:22.153 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:21:22.153 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:22.153 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:22.153 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:21:22.153 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:21:22.153 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:22.153 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:22.153 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:22.153 1+0 records in 00:21:22.153 1+0 records out 00:21:22.153 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000875613 s, 4.7 MB/s 00:21:22.153 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:22.153 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:21:22.153 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:22.153 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:22.153 18:48:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:21:22.153 18:48:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:22.153 18:48:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:21:22.153 18:48:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:22.153 18:48:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:22.153 18:48:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:22.411 18:48:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:21:22.411 { 00:21:22.411 "nbd_device": "/dev/nbd0", 00:21:22.411 "bdev_name": "Nvme0n1" 00:21:22.411 }, 00:21:22.411 { 00:21:22.411 "nbd_device": "/dev/nbd1", 00:21:22.411 "bdev_name": "Nvme1n1p1" 00:21:22.411 }, 00:21:22.411 { 00:21:22.411 "nbd_device": "/dev/nbd10", 00:21:22.411 "bdev_name": "Nvme1n1p2" 00:21:22.411 }, 00:21:22.411 { 00:21:22.411 "nbd_device": "/dev/nbd11", 00:21:22.411 "bdev_name": "Nvme2n1" 00:21:22.411 }, 00:21:22.411 { 00:21:22.411 "nbd_device": "/dev/nbd12", 00:21:22.411 "bdev_name": "Nvme2n2" 00:21:22.411 }, 00:21:22.411 { 00:21:22.411 "nbd_device": "/dev/nbd13", 00:21:22.411 "bdev_name": "Nvme2n3" 
00:21:22.411 }, 00:21:22.411 { 00:21:22.411 "nbd_device": "/dev/nbd14", 00:21:22.411 "bdev_name": "Nvme3n1" 00:21:22.411 } 00:21:22.411 ]' 00:21:22.411 18:48:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:21:22.411 { 00:21:22.411 "nbd_device": "/dev/nbd0", 00:21:22.411 "bdev_name": "Nvme0n1" 00:21:22.411 }, 00:21:22.411 { 00:21:22.411 "nbd_device": "/dev/nbd1", 00:21:22.411 "bdev_name": "Nvme1n1p1" 00:21:22.411 }, 00:21:22.411 { 00:21:22.411 "nbd_device": "/dev/nbd10", 00:21:22.411 "bdev_name": "Nvme1n1p2" 00:21:22.411 }, 00:21:22.411 { 00:21:22.411 "nbd_device": "/dev/nbd11", 00:21:22.411 "bdev_name": "Nvme2n1" 00:21:22.411 }, 00:21:22.411 { 00:21:22.411 "nbd_device": "/dev/nbd12", 00:21:22.411 "bdev_name": "Nvme2n2" 00:21:22.411 }, 00:21:22.411 { 00:21:22.411 "nbd_device": "/dev/nbd13", 00:21:22.411 "bdev_name": "Nvme2n3" 00:21:22.411 }, 00:21:22.411 { 00:21:22.411 "nbd_device": "/dev/nbd14", 00:21:22.411 "bdev_name": "Nvme3n1" 00:21:22.411 } 00:21:22.411 ]' 00:21:22.411 18:48:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:22.411 18:48:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:21:22.411 /dev/nbd1 00:21:22.411 /dev/nbd10 00:21:22.411 /dev/nbd11 00:21:22.411 /dev/nbd12 00:21:22.411 /dev/nbd13 00:21:22.411 /dev/nbd14' 00:21:22.411 18:48:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:22.411 18:48:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:21:22.411 /dev/nbd1 00:21:22.411 /dev/nbd10 00:21:22.411 /dev/nbd11 00:21:22.411 /dev/nbd12 00:21:22.411 /dev/nbd13 00:21:22.411 /dev/nbd14' 00:21:22.411 18:48:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:21:22.411 18:48:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:21:22.411 18:48:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:21:22.411 18:48:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:21:22.412 18:48:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:21:22.412 18:48:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:21:22.412 18:48:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:21:22.412 18:48:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:21:22.412 18:48:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:21:22.412 18:48:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:21:22.412 18:48:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:21:22.412 256+0 records in 00:21:22.412 256+0 records out 00:21:22.412 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012959 s, 80.9 MB/s 00:21:22.412 18:48:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:22.412 18:48:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:21:22.669 256+0 records in 00:21:22.669 256+0 records out 00:21:22.669 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.141449 s, 7.4 MB/s 00:21:22.669 18:48:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:22.669 18:48:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:21:22.927 256+0 records in 00:21:22.927 256+0 records out 00:21:22.927 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.143044 s, 7.3 MB/s 00:21:22.927 18:48:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:22.927 18:48:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:21:22.927 256+0 records in 00:21:22.927 256+0 records out 00:21:22.927 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.144321 s, 7.3 MB/s 00:21:22.927 18:48:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:22.927 18:48:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:21:23.185 256+0 records in 00:21:23.185 256+0 records out 00:21:23.185 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.141772 s, 7.4 MB/s 00:21:23.185 18:48:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:23.185 18:48:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:21:23.185 256+0 records in 00:21:23.185 256+0 records out 00:21:23.185 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.141172 s, 7.4 MB/s 00:21:23.185 18:48:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:23.185 18:48:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:21:23.442 256+0 records in 00:21:23.442 256+0 records out 00:21:23.442 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146266 s, 7.2 MB/s 00:21:23.442 18:48:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:23.442 18:48:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:21:23.442 256+0 records in 00:21:23.442 256+0 records out 00:21:23.442 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146607 s, 7.2 MB/s 00:21:23.442 18:48:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:21:23.442 18:48:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:21:23.442 18:48:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:21:23.442 18:48:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:21:23.442 18:48:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:21:23.442 18:48:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:21:23.442 18:48:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:21:23.442 18:48:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:21:23.442 18:48:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:21:24.008 18:48:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:24.008 18:48:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:21:24.008 18:48:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:24.008 18:48:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:21:24.008 18:48:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:24.008 18:48:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:21:24.008 18:48:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:24.008 18:48:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:21:24.008 18:48:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:24.008 18:48:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:21:24.008 18:48:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:24.008 18:48:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:21:24.008 18:48:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:21:24.008 18:48:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:21:24.008 18:48:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:24.008 18:48:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:21:24.008 18:48:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:24.008 18:48:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:21:24.008 18:48:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:24.008 18:48:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:24.265 18:48:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:24.265 18:48:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:24.265 18:48:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:24.265 18:48:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:24.265 18:48:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:24.265 18:48:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:24.265 18:48:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:24.265 18:48:52 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:21:24.265 18:48:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:24.265 18:48:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:21:24.523 18:48:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:24.523 18:48:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:24.523 18:48:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:24.523 18:48:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:24.523 18:48:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:24.523 18:48:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:24.523 18:48:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:24.523 18:48:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:24.523 18:48:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:24.523 18:48:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:21:24.780 18:48:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:21:24.780 18:48:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:21:24.780 18:48:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:21:24.780 18:48:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:24.780 18:48:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:24.780 18:48:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:21:24.780 18:48:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:24.780 18:48:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:24.780 18:48:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:24.780 18:48:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:21:25.038 18:48:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:21:25.038 18:48:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:21:25.038 18:48:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:21:25.038 18:48:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:25.038 18:48:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:25.038 18:48:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:21:25.038 18:48:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:25.038 18:48:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:25.038 18:48:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:25.038 18:48:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:21:25.296 18:48:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:21:25.296 18:48:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:21:25.296 18:48:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:21:25.296 18:48:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:25.296 18:48:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:25.296 18:48:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:21:25.296 18:48:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:25.296 18:48:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:25.296 18:48:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:25.296 18:48:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:21:25.561 18:48:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:21:25.561 18:48:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:21:25.561 18:48:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:21:25.561 18:48:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:25.561 18:48:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:25.561 18:48:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:21:25.561 18:48:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:25.561 18:48:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:25.561 18:48:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:25.561 18:48:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:21:26.127 18:48:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:21:26.127 18:48:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:21:26.127 18:48:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:21:26.127 18:48:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:26.127 18:48:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:26.127 18:48:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:21:26.127 18:48:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:26.127 18:48:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:26.127 18:48:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:26.127 18:48:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:26.127 18:48:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:26.127 18:48:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:21:26.127 18:48:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:26.127 18:48:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:21:26.127 18:48:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:21:26.127 18:48:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:21:26.127 18:48:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:26.127 18:48:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:21:26.127 18:48:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:21:26.127 18:48:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:21:26.384 18:48:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:21:26.384 18:48:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:21:26.384 18:48:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:21:26.384 18:48:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:21:26.384 18:48:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:26.384 18:48:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:21:26.384 18:48:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:21:26.641 malloc_lvol_verify 00:21:26.642 18:48:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:21:26.898 a40bffe0-140f-4419-bc03-3031d0b9a24f 00:21:26.898 18:48:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:21:27.156 369b095c-5394-47b8-820f-3a6d0bf0e1d7 00:21:27.156 18:48:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:21:27.413 /dev/nbd0 00:21:27.413 18:48:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:21:27.413 18:48:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:21:27.413 18:48:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:21:27.413 18:48:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:21:27.413 18:48:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:21:27.413 mke2fs 1.47.0 (5-Feb-2023) 00:21:27.413 Discarding device blocks: 0/4096 done 00:21:27.413 Creating filesystem with 4096 1k blocks and 1024 inodes 00:21:27.413 00:21:27.413 Allocating group tables: 0/1 done 00:21:27.413 Writing inode tables: 0/1 done 00:21:27.413 Creating journal (1024 blocks): done 00:21:27.413 Writing superblocks and filesystem accounting information: 0/1 done 00:21:27.413 00:21:27.413 18:48:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:21:27.413 18:48:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:27.413 18:48:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:27.413 18:48:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:27.413 18:48:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:21:27.413 18:48:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:21:27.413 18:48:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:27.670 18:48:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:27.670 18:48:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:27.670 18:48:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:27.670 18:48:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:27.670 18:48:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:27.671 18:48:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:27.671 18:48:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:27.671 18:48:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:27.671 18:48:56 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 63642 00:21:27.671 18:48:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 63642 ']' 00:21:27.671 18:48:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 63642 00:21:27.671 18:48:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:21:27.671 18:48:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:27.671 18:48:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63642 00:21:27.671 18:48:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:27.671 18:48:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:27.671 killing process with pid 63642 00:21:27.671 18:48:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63642' 00:21:27.671 18:48:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@969 -- # kill 63642 00:21:27.671 18:48:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@974 -- # wait 63642 00:21:29.570 18:48:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:21:29.570 00:21:29.570 real 0m15.966s 00:21:29.570 user 0m20.938s 00:21:29.570 sys 0m6.477s 00:21:29.570 18:48:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:29.570 18:48:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:21:29.570 ************************************ 00:21:29.570 END TEST bdev_nbd 00:21:29.570 ************************************ 00:21:29.570 18:48:57 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:21:29.570 18:48:57 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:21:29.570 18:48:57 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:21:29.570 skipping fio tests on NVMe due to multi-ns failures. 00:21:29.570 18:48:57 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:21:29.570 18:48:57 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:29.570 18:48:57 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:21:29.570 18:48:57 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:21:29.570 18:48:57 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:29.570 18:48:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:21:29.570 ************************************ 00:21:29.570 START TEST bdev_verify 00:21:29.570 ************************************ 00:21:29.571 18:48:57 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:21:29.571 [2024-10-08 18:48:57.949210] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:21:29.571 [2024-10-08 18:48:57.949339] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64112 ] 00:21:29.571 [2024-10-08 18:48:58.117359] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:29.830 [2024-10-08 18:48:58.399027] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:29.830 [2024-10-08 18:48:58.399036] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:21:30.766 Running I/O for 5 seconds... 
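The bdev_verify stage above boots the bdevperf example application straight from the traced command line. Spelled out with the option meanings (glosses follow bdevperf's usage text; treat the -C reading as an inference from the paired Core Mask 0x1/0x2 jobs in the results below):

    # queue depth 128, 4 KiB I/Os, verify (read-back-and-check) workload, 5 s run.
    # -m 0x3 starts reactors on cores 0 and 1; -C lets every core drive every
    # bdev, which is why each device reports two jobs (Core Mask 0x1 and 0x2).
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3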
00:21:33.075 18624.00 IOPS, 72.75 MiB/s [2024-10-08T18:49:02.768Z] 17728.00 IOPS, 69.25 MiB/s [2024-10-08T18:49:03.704Z] 18176.00 IOPS, 71.00 MiB/s [2024-10-08T18:49:04.640Z] 18144.00 IOPS, 70.88 MiB/s [2024-10-08T18:49:04.640Z] 18009.60 IOPS, 70.35 MiB/s
00:21:35.883 Latency(us)
00:21:35.883 [2024-10-08T18:49:04.640Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:35.883 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:35.883 Verification LBA range: start 0x0 length 0xbd0bd
00:21:35.883 Nvme0n1 : 5.06 1315.02 5.14 0.00 0.00 96984.00 20846.69 102360.99
00:21:35.883 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:21:35.883 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:21:35.883 Nvme0n1 : 5.07 1210.98 4.73 0.00 0.00 105369.44 20347.37 96868.45
00:21:35.883 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:35.883 Verification LBA range: start 0x0 length 0x4ff80
00:21:35.883 Nvme1n1p1 : 5.06 1314.54 5.13 0.00 0.00 96795.97 23468.13 99365.06
00:21:35.883 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:21:35.883 Verification LBA range: start 0x4ff80 length 0x4ff80
00:21:35.883 Nvme1n1p1 : 5.08 1210.51 4.73 0.00 0.00 105232.84 23218.47 96369.13
00:21:35.883 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:35.883 Verification LBA range: start 0x0 length 0x4ff7f
00:21:35.883 Nvme1n1p2 : 5.06 1314.12 5.13 0.00 0.00 96599.09 25839.91 96868.45
00:21:35.883 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:21:35.883 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:21:35.883 Nvme1n1p2 : 5.08 1209.97 4.73 0.00 0.00 104947.68 24466.77 92873.87
00:21:35.883 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:35.883 Verification LBA range: start 0x0 length 0x80000
00:21:35.883 Nvme2n1 : 5.08 1322.24 5.17 0.00 0.00 95839.64 4369.07 91375.91
00:21:35.883 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:21:35.883 Verification LBA range: start 0x80000 length 0x80000
00:21:35.883 Nvme2n1 : 5.08 1209.49 4.72 0.00 0.00 104744.25 23343.30 87880.66
00:21:35.883 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:35.883 Verification LBA range: start 0x0 length 0x80000
00:21:35.883 Nvme2n2 : 5.09 1321.39 5.16 0.00 0.00 95670.41 6179.11 93872.52
00:21:35.883 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:21:35.883 Verification LBA range: start 0x80000 length 0x80000
00:21:35.883 Nvme2n2 : 5.08 1208.99 4.72 0.00 0.00 104547.34 22094.99 91875.23
00:21:35.883 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:35.883 Verification LBA range: start 0x0 length 0x80000
00:21:35.883 Nvme2n3 : 5.10 1330.43 5.20 0.00 0.00 94931.21 10673.01 99864.38
00:21:35.883 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:21:35.883 Verification LBA range: start 0x80000 length 0x80000
00:21:35.883 Nvme2n3 : 5.10 1217.42 4.76 0.00 0.00 103663.23 4150.61 94371.84
00:21:35.883 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:35.883 Verification LBA range: start 0x0 length 0x20000
00:21:35.883 Nvme3n1 : 5.10 1329.76 5.19 0.00 0.00 94753.88 11983.73 102860.31
00:21:35.883 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:21:35.883 Verification LBA range: start 0x20000 length 0x20000
Nvme3n1 : 5.12 1225.09 4.79 0.00 0.00 102882.24 12483.05 96868.45 [2024-10-08T18:49:04.640Z] ===================================================================================================================
00:21:35.883 [2024-10-08T18:49:04.640Z] Total : 17739.95 69.30 0.00 0.00 100026.39 4150.61 102860.31
00:21:37.828
00:21:37.828 real 0m8.275s
00:21:37.828 user 0m14.953s
00:21:37.828 sys 0m0.309s
00:21:37.828 18:49:06 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable
00:21:37.828 18:49:06 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:21:37.828 ************************************
00:21:37.828 END TEST bdev_verify
00:21:37.828 ************************************
00:21:37.828 18:49:06 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:21:37.828 18:49:06 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']'
00:21:37.828 18:49:06 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable
00:21:37.828 18:49:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:21:37.828 ************************************
00:21:37.828 START TEST bdev_verify_big_io
00:21:37.828 ************************************
00:21:37.828 18:49:06 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:21:37.829 [2024-10-08 18:49:06.289313] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization...
00:21:37.829 [2024-10-08 18:49:06.289458] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64220 ]
00:21:37.829 [2024-10-08 18:49:06.462233] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:21:38.087 [2024-10-08 18:49:06.757485] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:21:38.087 [2024-10-08 18:49:06.757493] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:21:39.023 Running I/O for 5 seconds...
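In the per-second progress lines of these bdevperf runs, bandwidth is simply IOPS scaled by the configured I/O size: MiB/s = IOPS * io_size / 2^20. Both runs check out against their first intervals; a quick sanity check:

    awk 'BEGIN { print 18624 * 4096 / 1048576 }'   # 72.75   -> "18624.00 IOPS, 72.75 MiB/s" (4 KiB verify, above)
    awk 'BEGIN { print 1786 * 65536 / 1048576 }'   # 111.625 -> "1786.00 IOPS, 111.62 MiB/s" (64 KiB big-I/O, below)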
00:21:45.675 1786.00 IOPS, 111.62 MiB/s [2024-10-08T18:49:14.432Z] 3134.50 IOPS, 195.91 MiB/s
00:21:45.675 Latency(us)
00:21:45.675 [2024-10-08T18:49:14.432Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:45.675 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:21:45.675 Verification LBA range: start 0x0 length 0xbd0b
00:21:45.675 Nvme0n1 : 6.20 70.20 4.39 0.00 0.00 1690337.94 19972.88 1829515.46
00:21:45.675 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:21:45.675 Verification LBA range: start 0xbd0b length 0xbd0b
00:21:45.675 Nvme0n1 : 6.27 77.79 4.86 0.00 0.00 1572768.71 16852.11 1693699.90
00:21:45.675 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:21:45.675 Verification LBA range: start 0x0 length 0x4ff8
00:21:45.675 Nvme1n1p1 : 6.20 72.75 4.55 0.00 0.00 1606226.12 110350.14 1733645.65
00:21:45.675 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:21:45.675 Verification LBA range: start 0x4ff8 length 0x4ff8
00:21:45.675 Nvme1n1p1 : 6.28 78.39 4.90 0.00 0.00 1523744.73 110849.46 1533916.89
00:21:45.675 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:21:45.675 Verification LBA range: start 0x0 length 0x4ff7
00:21:45.675 Nvme1n1p2 : 6.20 73.67 4.60 0.00 0.00 1509572.49 163777.58 1781580.56
00:21:45.675 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:21:45.675 Verification LBA range: start 0x4ff7 length 0x4ff7
00:21:45.675 Nvme1n1p2 : 6.28 63.04 3.94 0.00 0.00 1814075.57 192738.26 2652397.96
00:21:45.675 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:21:45.675 Verification LBA range: start 0x0 length 0x8000
00:21:45.675 Nvme2n1 : 6.31 73.17 4.57 0.00 0.00 1460591.69 106854.89 2684354.56
00:21:45.675 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:21:45.675 Verification LBA range: start 0x8000 length 0x8000
00:21:45.675 Nvme2n1 : 6.34 81.38 5.09 0.00 0.00 1357377.57 135815.56 1509949.44
00:21:45.676 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:21:45.676 Verification LBA range: start 0x0 length 0x8000
00:21:45.676 Nvme2n2 : 6.50 80.44 5.03 0.00 0.00 1274664.76 87381.33 2732289.46
00:21:45.676 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:21:45.676 Verification LBA range: start 0x8000 length 0x8000
00:21:45.676 Nvme2n2 : 6.29 81.45 5.09 0.00 0.00 1298257.43 112347.43 1430057.94
00:21:45.676 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:21:45.676 Verification LBA range: start 0x0 length 0x8000
00:21:45.676 Nvme2n3 : 6.53 90.25 5.64 0.00 0.00 1088869.75 21346.01 2780224.37
00:21:45.676 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:21:45.676 Verification LBA range: start 0x8000 length 0x8000
00:21:45.676 Nvme2n3 : 6.39 90.13 5.63 0.00 0.00 1126591.80 46936.26 1470003.69
00:21:45.676 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:21:45.676 Verification LBA range: start 0x0 length 0x2000
00:21:45.676 Nvme3n1 : 6.65 146.66 9.17 0.00 0.00 647113.06 936.23 2844137.57
00:21:45.676 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:21:45.676 Verification LBA range: start 0x2000 length 0x2000
00:21:45.676 Nvme3n1 : 6.50 108.38 6.77 0.00 0.00 898144.91 908.92 1509949.44
00:21:45.676 [2024-10-08T18:49:14.433Z] ===================================================================================================================
00:21:45.676 [2024-10-08T18:49:14.433Z] Total : 1187.70 74.23 0.00 0.00 1273125.02 908.92 2844137.57
00:21:48.277
00:21:48.277 real 0m10.300s
00:21:48.277 user 0m18.919s
00:21:48.277 sys 0m0.377s
00:21:48.277 18:49:16 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable
00:21:48.277 18:49:16 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:21:48.277 ************************************
00:21:48.277 END TEST bdev_verify_big_io
00:21:48.277 ************************************
00:21:48.277 18:49:16 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:21:48.277 18:49:16 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:21:48.277 18:49:16 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable
00:21:48.277 18:49:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:21:48.277 ************************************
00:21:48.277 START TEST bdev_write_zeroes
00:21:48.277 ************************************
00:21:48.277 18:49:16 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:21:48.277 [2024-10-08 18:49:16.652390] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization...
00:21:48.277 [2024-10-08 18:49:16.652533] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64353 ]
00:21:48.277 [2024-10-08 18:49:16.821271] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:48.535 [2024-10-08 18:49:17.062351] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:21:49.101 Running I/O for 1 seconds...
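Every START TEST / END TEST pair and the real/user/sys block between them comes from the run_test wrapper in common/autotest_common.sh, which the @1101/@1107 argument checks and xtrace toggles above belong to. A rough sketch of its shape, assuming the banner and return handling (the real wrapper also manages xtrace state around the body):

    run_test() {
        local test_name=$1
        shift
        echo '************************************'
        echo "START TEST $test_name"
        echo '************************************'
        time "$@"            # emits the real/user/sys lines seen in the log
        local rc=$?
        echo '************************************'
        echo "END TEST $test_name"
        echo '************************************'
        return $rc
    }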
00:21:50.475 49664.00 IOPS, 194.00 MiB/s
00:21:50.475 Latency(us)
00:21:50.475 [2024-10-08T18:49:19.232Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:50.475 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:21:50.475 Nvme0n1 : 1.03 7058.08 27.57 0.00 0.00 18084.52 14105.84 32705.58
00:21:50.475 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:21:50.475 Nvme1n1p1 : 1.04 7046.59 27.53 0.00 0.00 18083.05 14730.00 32206.26
00:21:50.475 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:21:50.475 Nvme1n1p2 : 1.04 7034.97 27.48 0.00 0.00 17999.14 11546.82 31332.45
00:21:50.475 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:21:50.475 Nvme2n1 : 1.04 7024.43 27.44 0.00 0.00 17958.34 10673.01 30458.64
00:21:50.475 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:21:50.475 Nvme2n2 : 1.04 7013.83 27.40 0.00 0.00 17922.69 9175.04 29959.31
00:21:50.475 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:21:50.475 Nvme2n3 : 1.04 7003.28 27.36 0.00 0.00 17908.60 8862.96 30708.30
00:21:50.475 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:21:50.475 Nvme3n1 : 1.04 6931.62 27.08 0.00 0.00 18062.93 13793.77 32955.25
00:21:50.475 [2024-10-08T18:49:19.232Z] ===================================================================================================================
00:21:50.475 [2024-10-08T18:49:19.232Z] Total : 49112.80 191.85 0.00 0.00 18002.68 8862.96 32955.25
00:21:51.857
00:21:51.857 real 0m3.869s
00:21:51.857 user 0m3.440s
00:21:51.857 sys 0m0.302s
00:21:51.857 18:49:20 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable ************************************ END TEST bdev_write_zeroes ************************************
00:21:51.857 18:49:20 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:21:51.857 18:49:20 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:21:51.857 18:49:20 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:21:51.857 18:49:20 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable
00:21:51.857 18:49:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:21:51.857 ************************************
00:21:51.857 START TEST bdev_json_nonenclosed
00:21:51.857 ************************************
00:21:51.857 18:49:20 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:21:52.115 [2024-10-08 18:49:20.614649] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization...
00:21:52.115 [2024-10-08 18:49:20.614909] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64410 ] 00:21:52.115 [2024-10-08 18:49:20.806703] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.373 [2024-10-08 18:49:21.101798] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:52.373 [2024-10-08 18:49:21.101922] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:21:52.373 [2024-10-08 18:49:21.101948] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:21:52.373 [2024-10-08 18:49:21.101961] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:52.938 00:21:52.938 real 0m1.055s 00:21:52.938 user 0m0.763s 00:21:52.938 sys 0m0.184s 00:21:52.938 18:49:21 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:52.938 18:49:21 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:21:52.938 ************************************ 00:21:52.938 END TEST bdev_json_nonenclosed 00:21:52.938 ************************************ 00:21:52.938 18:49:21 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:52.938 18:49:21 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:21:52.938 18:49:21 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:52.938 18:49:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:21:52.938 ************************************ 00:21:52.938 START TEST bdev_json_nonarray 00:21:52.938 ************************************ 00:21:52.938 18:49:21 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:53.196 [2024-10-08 18:49:21.697689] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:21:53.196 [2024-10-08 18:49:21.697835] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64441 ] 00:21:53.196 [2024-10-08 18:49:21.862044] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.454 [2024-10-08 18:49:22.093099] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:53.454 [2024-10-08 18:49:22.093217] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
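The two *ERROR* lines above are the expected outcomes: bdev_json_nonenclosed fed bdevperf a config whose top level is not a JSON object, and bdev_json_nonarray one whose subsystems key is not an array; json_config_prepare_ctx must reject both and the app exits non-zero, which the test treats as a pass. Hypothetical minimal fixture contents that would trip exactly these checks (the repo's real nonenclosed.json/nonarray.json may differ):

    # rejected with "Invalid JSON configuration: not enclosed in {}."
    cat > nonenclosed.json <<'EOF'
    "subsystems": []
    EOF

    # rejected with "Invalid JSON configuration: 'subsystems' should be an array."
    cat > nonarray.json <<'EOF'
    { "subsystems": { "subsystem": "bdev" } }
    EOF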
00:21:53.454 [2024-10-08 18:49:22.093244] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:21:53.454 [2024-10-08 18:49:22.093257] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:54.021 00:21:54.021 real 0m0.945s 00:21:54.021 user 0m0.667s 00:21:54.021 sys 0m0.171s 00:21:54.021 18:49:22 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:54.021 18:49:22 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:21:54.021 ************************************ 00:21:54.021 END TEST bdev_json_nonarray 00:21:54.021 ************************************ 00:21:54.021 18:49:22 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:21:54.021 18:49:22 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:21:54.021 18:49:22 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:21:54.021 18:49:22 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:54.021 18:49:22 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:54.021 18:49:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:21:54.021 ************************************ 00:21:54.021 START TEST bdev_gpt_uuid 00:21:54.021 ************************************ 00:21:54.021 18:49:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # bdev_gpt_uuid 00:21:54.021 18:49:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:21:54.021 18:49:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:21:54.021 18:49:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=64468 00:21:54.021 18:49:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:21:54.021 18:49:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 64468 00:21:54.021 18:49:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # '[' -z 64468 ']' 00:21:54.021 18:49:22 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:21:54.021 18:49:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:54.021 18:49:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:54.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:54.021 18:49:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:54.021 18:49:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:54.021 18:49:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:21:54.021 [2024-10-08 18:49:22.735194] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
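The final stage, bdev_gpt_uuid, starts spdk_tgt here, loads bdev.json, waits for bdev examination, and then checks that each GPT partition bdev can be fetched by its partition UUID with matching alias and unique_partition_guid fields. The jq assertions traced at blockdev.sh@620-@628 below reduce to this pattern (rpc_cmd in the suite wraps scripts/rpc.py against the target's /var/tmp/spdk.sock; the UUID shown is the SPDK_TEST_first partition's, as in the trace):

    uuid=6f89f330-603b-4116-ac73-2ca8eae53030
    bdev=$(scripts/rpc.py bdev_get_bdevs -b "$uuid")

    [ "$(jq -r length <<< "$bdev")" = 1 ]                                              # exactly one match
    [ "$(jq -r '.[0].aliases[0]' <<< "$bdev")" = "$uuid" ]                             # alias is the UUID
    [ "$(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev")" = "$uuid" ]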
00:21:54.021 [2024-10-08 18:49:22.735397] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64468 ] 00:21:54.278 [2024-10-08 18:49:22.905882] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:54.536 [2024-10-08 18:49:23.135216] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:55.470 18:49:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:55.470 18:49:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # return 0 00:21:55.470 18:49:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:55.470 18:49:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.470 18:49:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:21:55.728 Some configs were skipped because the RPC state that can call them passed over. 00:21:55.728 18:49:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.728 18:49:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:21:55.728 18:49:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.728 18:49:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:21:55.728 18:49:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.728 18:49:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:21:55.728 18:49:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.728 18:49:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:21:55.987 18:49:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.987 18:49:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:21:55.987 { 00:21:55.987 "name": "Nvme1n1p1", 00:21:55.987 "aliases": [ 00:21:55.987 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:21:55.987 ], 00:21:55.987 "product_name": "GPT Disk", 00:21:55.987 "block_size": 4096, 00:21:55.987 "num_blocks": 655104, 00:21:55.987 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:21:55.987 "assigned_rate_limits": { 00:21:55.987 "rw_ios_per_sec": 0, 00:21:55.987 "rw_mbytes_per_sec": 0, 00:21:55.987 "r_mbytes_per_sec": 0, 00:21:55.987 "w_mbytes_per_sec": 0 00:21:55.987 }, 00:21:55.987 "claimed": false, 00:21:55.987 "zoned": false, 00:21:55.987 "supported_io_types": { 00:21:55.987 "read": true, 00:21:55.987 "write": true, 00:21:55.987 "unmap": true, 00:21:55.987 "flush": true, 00:21:55.987 "reset": true, 00:21:55.987 "nvme_admin": false, 00:21:55.987 "nvme_io": false, 00:21:55.987 "nvme_io_md": false, 00:21:55.987 "write_zeroes": true, 00:21:55.987 "zcopy": false, 00:21:55.987 "get_zone_info": false, 00:21:55.987 "zone_management": false, 00:21:55.987 "zone_append": false, 00:21:55.987 "compare": true, 00:21:55.987 "compare_and_write": false, 00:21:55.987 "abort": true, 00:21:55.987 "seek_hole": false, 00:21:55.987 "seek_data": false, 00:21:55.987 "copy": true, 00:21:55.987 "nvme_iov_md": false 00:21:55.987 }, 00:21:55.987 "driver_specific": { 
00:21:55.987 "gpt": { 00:21:55.987 "base_bdev": "Nvme1n1", 00:21:55.987 "offset_blocks": 256, 00:21:55.987 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:21:55.987 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:21:55.987 "partition_name": "SPDK_TEST_first" 00:21:55.987 } 00:21:55.987 } 00:21:55.987 } 00:21:55.987 ]' 00:21:55.987 18:49:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:21:55.987 18:49:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:21:55.987 18:49:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:21:55.987 18:49:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:21:55.987 18:49:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:21:55.987 18:49:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:21:55.987 18:49:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:21:55.987 18:49:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.987 18:49:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:21:55.987 18:49:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.987 18:49:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:21:55.987 { 00:21:55.987 "name": "Nvme1n1p2", 00:21:55.987 "aliases": [ 00:21:55.987 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:21:55.987 ], 00:21:55.987 "product_name": "GPT Disk", 00:21:55.988 "block_size": 4096, 00:21:55.988 "num_blocks": 655103, 00:21:55.988 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:21:55.988 "assigned_rate_limits": { 00:21:55.988 "rw_ios_per_sec": 0, 00:21:55.988 "rw_mbytes_per_sec": 0, 00:21:55.988 "r_mbytes_per_sec": 0, 00:21:55.988 "w_mbytes_per_sec": 0 00:21:55.988 }, 00:21:55.988 "claimed": false, 00:21:55.988 "zoned": false, 00:21:55.988 "supported_io_types": { 00:21:55.988 "read": true, 00:21:55.988 "write": true, 00:21:55.988 "unmap": true, 00:21:55.988 "flush": true, 00:21:55.988 "reset": true, 00:21:55.988 "nvme_admin": false, 00:21:55.988 "nvme_io": false, 00:21:55.988 "nvme_io_md": false, 00:21:55.988 "write_zeroes": true, 00:21:55.988 "zcopy": false, 00:21:55.988 "get_zone_info": false, 00:21:55.988 "zone_management": false, 00:21:55.988 "zone_append": false, 00:21:55.988 "compare": true, 00:21:55.988 "compare_and_write": false, 00:21:55.988 "abort": true, 00:21:55.988 "seek_hole": false, 00:21:55.988 "seek_data": false, 00:21:55.988 "copy": true, 00:21:55.988 "nvme_iov_md": false 00:21:55.988 }, 00:21:55.988 "driver_specific": { 00:21:55.988 "gpt": { 00:21:55.988 "base_bdev": "Nvme1n1", 00:21:55.988 "offset_blocks": 655360, 00:21:55.988 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:21:55.988 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:21:55.988 "partition_name": "SPDK_TEST_second" 00:21:55.988 } 00:21:55.988 } 00:21:55.988 } 00:21:55.988 ]' 00:21:55.988 18:49:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:21:55.988 18:49:24 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:21:55.988 18:49:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:21:56.247 18:49:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:21:56.247 18:49:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:21:56.247 18:49:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:21:56.247 18:49:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 64468 00:21:56.247 18:49:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # '[' -z 64468 ']' 00:21:56.247 18:49:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # kill -0 64468 00:21:56.247 18:49:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # uname 00:21:56.247 18:49:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:56.247 18:49:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64468 00:21:56.247 18:49:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:56.247 18:49:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:56.247 killing process with pid 64468 00:21:56.247 18:49:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64468' 00:21:56.247 18:49:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@969 -- # kill 64468 00:21:56.247 18:49:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@974 -- # wait 64468 00:21:59.534 00:21:59.534 real 0m5.373s 00:21:59.534 user 0m5.612s 00:21:59.534 sys 0m0.617s 00:21:59.534 18:49:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:59.534 18:49:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:21:59.534 ************************************ 00:21:59.534 END TEST bdev_gpt_uuid 00:21:59.534 ************************************ 00:21:59.534 18:49:28 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:21:59.534 18:49:28 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:21:59.534 18:49:28 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:21:59.534 18:49:28 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:21:59.534 18:49:28 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:59.534 18:49:28 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:21:59.534 18:49:28 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:21:59.534 18:49:28 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:21:59.534 18:49:28 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:59.823 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:00.086 Waiting for block devices as requested 00:22:00.086 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:00.345 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:22:00.345 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:22:00.603 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:22:05.872 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:22:05.872 18:49:34 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:22:05.872 18:49:34 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:22:05.872 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:22:05.872 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:22:05.872 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:22:05.872 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:22:05.872 18:49:34 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:22:05.872 ************************************ 00:22:05.872 END TEST blockdev_nvme_gpt 00:22:05.872 ************************************ 00:22:05.872 00:22:05.872 real 1m14.937s 00:22:05.872 user 1m34.363s 00:22:05.872 sys 0m13.384s 00:22:05.872 18:49:34 blockdev_nvme_gpt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:05.872 18:49:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:22:05.872 18:49:34 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:22:05.872 18:49:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:05.872 18:49:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:05.872 18:49:34 -- common/autotest_common.sh@10 -- # set +x 00:22:05.872 ************************************ 00:22:05.872 START TEST nvme 00:22:05.872 ************************************ 00:22:05.872 18:49:34 nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:22:06.131 * Looking for test storage... 00:22:06.131 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:22:06.131 18:49:34 nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:06.131 18:49:34 nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:06.131 18:49:34 nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:22:06.131 18:49:34 nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:06.131 18:49:34 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:06.131 18:49:34 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:06.131 18:49:34 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:06.131 18:49:34 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:22:06.131 18:49:34 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:22:06.131 18:49:34 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:22:06.131 18:49:34 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:22:06.131 18:49:34 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:22:06.131 18:49:34 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:22:06.131 18:49:34 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:22:06.131 18:49:34 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:06.131 18:49:34 nvme -- scripts/common.sh@344 -- # case "$op" in 00:22:06.131 18:49:34 nvme -- scripts/common.sh@345 -- # : 1 00:22:06.131 18:49:34 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:06.131 18:49:34 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:06.131 18:49:34 nvme -- scripts/common.sh@365 -- # decimal 1 00:22:06.131 18:49:34 nvme -- scripts/common.sh@353 -- # local d=1 00:22:06.131 18:49:34 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:06.131 18:49:34 nvme -- scripts/common.sh@355 -- # echo 1 00:22:06.131 18:49:34 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:22:06.131 18:49:34 nvme -- scripts/common.sh@366 -- # decimal 2 00:22:06.131 18:49:34 nvme -- scripts/common.sh@353 -- # local d=2 00:22:06.131 18:49:34 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:06.131 18:49:34 nvme -- scripts/common.sh@355 -- # echo 2 00:22:06.131 18:49:34 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:22:06.131 18:49:34 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:06.131 18:49:34 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:06.131 18:49:34 nvme -- scripts/common.sh@368 -- # return 0 00:22:06.131 18:49:34 nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:06.131 18:49:34 nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:06.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.131 --rc genhtml_branch_coverage=1 00:22:06.131 --rc genhtml_function_coverage=1 00:22:06.131 --rc genhtml_legend=1 00:22:06.131 --rc geninfo_all_blocks=1 00:22:06.131 --rc geninfo_unexecuted_blocks=1 00:22:06.132 00:22:06.132 ' 00:22:06.132 18:49:34 nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:06.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.132 --rc genhtml_branch_coverage=1 00:22:06.132 --rc genhtml_function_coverage=1 00:22:06.132 --rc genhtml_legend=1 00:22:06.132 --rc geninfo_all_blocks=1 00:22:06.132 --rc geninfo_unexecuted_blocks=1 00:22:06.132 00:22:06.132 ' 00:22:06.132 18:49:34 nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:06.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.132 --rc genhtml_branch_coverage=1 00:22:06.132 --rc genhtml_function_coverage=1 00:22:06.132 --rc genhtml_legend=1 00:22:06.132 --rc geninfo_all_blocks=1 00:22:06.132 --rc geninfo_unexecuted_blocks=1 00:22:06.132 00:22:06.132 ' 00:22:06.132 18:49:34 nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:06.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.132 --rc genhtml_branch_coverage=1 00:22:06.132 --rc genhtml_function_coverage=1 00:22:06.132 --rc genhtml_legend=1 00:22:06.132 --rc geninfo_all_blocks=1 00:22:06.132 --rc geninfo_unexecuted_blocks=1 00:22:06.132 00:22:06.132 ' 00:22:06.132 18:49:34 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:06.699 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:07.633 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:22:07.633 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:22:07.633 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:22:07.633 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:22:07.633 18:49:36 nvme -- nvme/nvme.sh@79 -- # uname 00:22:07.633 18:49:36 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:22:07.633 18:49:36 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:22:07.633 18:49:36 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:22:07.633 18:49:36 nvme -- common/autotest_common.sh@1082 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:22:07.633 18:49:36 nvme -- 
common/autotest_common.sh@1068 -- # _randomize_va_space=2 00:22:07.633 18:49:36 nvme -- common/autotest_common.sh@1069 -- # echo 0 00:22:07.633 18:49:36 nvme -- common/autotest_common.sh@1071 -- # stubpid=65143 00:22:07.633 18:49:36 nvme -- common/autotest_common.sh@1070 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:22:07.633 Waiting for stub to ready for secondary processes... 00:22:07.633 18:49:36 nvme -- common/autotest_common.sh@1072 -- # echo Waiting for stub to ready for secondary processes... 00:22:07.633 18:49:36 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:22:07.633 18:49:36 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/65143 ]] 00:22:07.633 18:49:36 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:22:07.890 [2024-10-08 18:49:36.423766] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:22:07.891 [2024-10-08 18:49:36.424036] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:22:08.824 18:49:37 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:22:08.824 18:49:37 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/65143 ]] 00:22:08.824 18:49:37 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:22:09.082 [2024-10-08 18:49:37.614586] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:09.341 [2024-10-08 18:49:37.959673] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:22:09.341 [2024-10-08 18:49:37.959684] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:22:09.341 [2024-10-08 18:49:37.959698] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:22:09.341 [2024-10-08 18:49:37.986495] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:22:09.341 [2024-10-08 18:49:37.986803] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:22:09.341 [2024-10-08 18:49:37.999823] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:22:09.341 [2024-10-08 18:49:38.000175] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:22:09.341 [2024-10-08 18:49:38.005913] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:22:09.341 [2024-10-08 18:49:38.006441] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:22:09.342 [2024-10-08 18:49:38.006655] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:22:09.342 [2024-10-08 18:49:38.012511] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:22:09.342 [2024-10-08 18:49:38.012896] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:22:09.342 [2024-10-08 18:49:38.013178] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:22:09.342 [2024-10-08 18:49:38.017464] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:22:09.342 [2024-10-08 18:49:38.017881] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:22:09.342 [2024-10-08 18:49:38.018170] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:22:09.342 [2024-10-08 18:49:38.018402] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:22:09.342 [2024-10-08 18:49:38.018602] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:22:09.908 18:49:38 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:22:09.908 18:49:38 nvme -- common/autotest_common.sh@1078 -- # echo done. 00:22:09.908 done. 00:22:09.908 18:49:38 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:22:09.908 18:49:38 nvme -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:22:09.908 18:49:38 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:09.908 18:49:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:22:09.908 ************************************ 00:22:09.908 START TEST nvme_reset 00:22:09.908 ************************************ 00:22:09.908 18:49:38 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:22:10.166 Initializing NVMe Controllers 00:22:10.166 Skipping QEMU NVMe SSD at 0000:00:10.0 00:22:10.166 Skipping QEMU NVMe SSD at 0000:00:11.0 00:22:10.166 Skipping QEMU NVMe SSD at 0000:00:13.0 00:22:10.166 Skipping QEMU NVMe SSD at 0000:00:12.0 00:22:10.166 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:22:10.166 ************************************ 00:22:10.166 END TEST nvme_reset 00:22:10.166 ************************************ 00:22:10.166 00:22:10.166 real 0m0.403s 00:22:10.166 user 0m0.118s 00:22:10.166 sys 0m0.209s 00:22:10.166 18:49:38 nvme.nvme_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:10.166 18:49:38 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:22:10.166 18:49:38 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:22:10.166 18:49:38 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:10.166 18:49:38 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:10.166 18:49:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:22:10.166 ************************************ 00:22:10.166 START TEST nvme_identify 00:22:10.166 ************************************ 00:22:10.166 18:49:38 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # nvme_identify 00:22:10.166 18:49:38 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:22:10.166 18:49:38 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:22:10.166 18:49:38 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:22:10.166 18:49:38 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:22:10.166 18:49:38 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:22:10.166 18:49:38 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:22:10.166 18:49:38 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:22:10.166 18:49:38 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:22:10.166 18:49:38 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:22:10.166 18:49:38 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:22:10.166 18:49:38 nvme.nvme_identify -- 
common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:22:10.166 18:49:38 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:22:10.736 [2024-10-08 18:49:39.200329] nvme_ctrlr.c:3659:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 65176 terminated unexpected 00:22:10.736 ===================================================== 00:22:10.736 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:22:10.736 ===================================================== 00:22:10.736 Controller Capabilities/Features 00:22:10.736 ================================ 00:22:10.736 Vendor ID: 1b36 00:22:10.736 Subsystem Vendor ID: 1af4 00:22:10.736 Serial Number: 12340 00:22:10.736 Model Number: QEMU NVMe Ctrl 00:22:10.736 Firmware Version: 8.0.0 00:22:10.736 Recommended Arb Burst: 6 00:22:10.736 IEEE OUI Identifier: 00 54 52 00:22:10.736 Multi-path I/O 00:22:10.736 May have multiple subsystem ports: No 00:22:10.736 May have multiple controllers: No 00:22:10.736 Associated with SR-IOV VF: No 00:22:10.736 Max Data Transfer Size: 524288 00:22:10.736 Max Number of Namespaces: 256 00:22:10.736 Max Number of I/O Queues: 64 00:22:10.736 NVMe Specification Version (VS): 1.4 00:22:10.736 NVMe Specification Version (Identify): 1.4 00:22:10.736 Maximum Queue Entries: 2048 00:22:10.736 Contiguous Queues Required: Yes 00:22:10.736 Arbitration Mechanisms Supported 00:22:10.736 Weighted Round Robin: Not Supported 00:22:10.736 Vendor Specific: Not Supported 00:22:10.736 Reset Timeout: 7500 ms 00:22:10.736 Doorbell Stride: 4 bytes 00:22:10.736 NVM Subsystem Reset: Not Supported 00:22:10.736 Command Sets Supported 00:22:10.736 NVM Command Set: Supported 00:22:10.736 Boot Partition: Not Supported 00:22:10.736 Memory Page Size Minimum: 4096 bytes 00:22:10.736 Memory Page Size Maximum: 65536 bytes 00:22:10.736 Persistent Memory Region: Not Supported 00:22:10.736 Optional Asynchronous Events Supported 00:22:10.736 Namespace Attribute Notices: Supported 00:22:10.736 Firmware Activation Notices: Not Supported 00:22:10.736 ANA Change Notices: Not Supported 00:22:10.736 PLE Aggregate Log Change Notices: Not Supported 00:22:10.736 LBA Status Info Alert Notices: Not Supported 00:22:10.736 EGE Aggregate Log Change Notices: Not Supported 00:22:10.736 Normal NVM Subsystem Shutdown event: Not Supported 00:22:10.736 Zone Descriptor Change Notices: Not Supported 00:22:10.736 Discovery Log Change Notices: Not Supported 00:22:10.736 Controller Attributes 00:22:10.736 128-bit Host Identifier: Not Supported 00:22:10.736 Non-Operational Permissive Mode: Not Supported 00:22:10.736 NVM Sets: Not Supported 00:22:10.736 Read Recovery Levels: Not Supported 00:22:10.736 Endurance Groups: Not Supported 00:22:10.736 Predictable Latency Mode: Not Supported 00:22:10.736 Traffic Based Keep ALive: Not Supported 00:22:10.736 Namespace Granularity: Not Supported 00:22:10.736 SQ Associations: Not Supported 00:22:10.736 UUID List: Not Supported 00:22:10.736 Multi-Domain Subsystem: Not Supported 00:22:10.736 Fixed Capacity Management: Not Supported 00:22:10.736 Variable Capacity Management: Not Supported 00:22:10.736 Delete Endurance Group: Not Supported 00:22:10.736 Delete NVM Set: Not Supported 00:22:10.736 Extended LBA Formats Supported: Supported 00:22:10.736 Flexible Data Placement Supported: Not Supported 00:22:10.736 00:22:10.736 Controller Memory Buffer Support 00:22:10.736 ================================ 00:22:10.736 Supported: No 00:22:10.736 
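
The bdf list printed at the start of this identify run comes from a single pipeline, shown verbatim in the xtrace above: gen_nvme.sh emits a JSON bdev config and jq extracts each controller's traddr. A minimal stand-alone sketch of that pipeline, with rootdir assumed from the trace:

#!/usr/bin/env bash
# Collect the PCIe addresses (bdfs) of all NVMe controllers, the way the
# get_nvme_bdfs helper traced above does. rootdir is taken from the trace;
# the jq filter is the one visible in the xtrace.
rootdir=/home/vagrant/spdk_repo/spdk
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
printf '%s\n' "${bdfs[@]}"   # here: 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
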
00:22:10.736 Persistent Memory Region Support 00:22:10.736 ================================ 00:22:10.736 Supported: No 00:22:10.736 00:22:10.736 Admin Command Set Attributes 00:22:10.736 ============================ 00:22:10.736 Security Send/Receive: Not Supported 00:22:10.736 Format NVM: Supported 00:22:10.736 Firmware Activate/Download: Not Supported 00:22:10.736 Namespace Management: Supported 00:22:10.736 Device Self-Test: Not Supported 00:22:10.736 Directives: Supported 00:22:10.736 NVMe-MI: Not Supported 00:22:10.736 Virtualization Management: Not Supported 00:22:10.736 Doorbell Buffer Config: Supported 00:22:10.736 Get LBA Status Capability: Not Supported 00:22:10.736 Command & Feature Lockdown Capability: Not Supported 00:22:10.736 Abort Command Limit: 4 00:22:10.736 Async Event Request Limit: 4 00:22:10.736 Number of Firmware Slots: N/A 00:22:10.736 Firmware Slot 1 Read-Only: N/A 00:22:10.736 Firmware Activation Without Reset: N/A 00:22:10.736 Multiple Update Detection Support: N/A 00:22:10.736 Firmware Update Granularity: No Information Provided 00:22:10.736 Per-Namespace SMART Log: Yes 00:22:10.736 Asymmetric Namespace Access Log Page: Not Supported 00:22:10.736 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:22:10.736 Command Effects Log Page: Supported 00:22:10.736 Get Log Page Extended Data: Supported 00:22:10.736 Telemetry Log Pages: Not Supported 00:22:10.736 Persistent Event Log Pages: Not Supported 00:22:10.736 Supported Log Pages Log Page: May Support 00:22:10.736 Commands Supported & Effects Log Page: Not Supported 00:22:10.736 Feature Identifiers & Effects Log Page:May Support 00:22:10.736 NVMe-MI Commands & Effects Log Page: May Support 00:22:10.736 Data Area 4 for Telemetry Log: Not Supported 00:22:10.736 Error Log Page Entries Supported: 1 00:22:10.736 Keep Alive: Not Supported 00:22:10.736 00:22:10.736 NVM Command Set Attributes 00:22:10.736 ========================== 00:22:10.736 Submission Queue Entry Size 00:22:10.736 Max: 64 00:22:10.736 Min: 64 00:22:10.736 Completion Queue Entry Size 00:22:10.736 Max: 16 00:22:10.736 Min: 16 00:22:10.736 Number of Namespaces: 256 00:22:10.736 Compare Command: Supported 00:22:10.736 Write Uncorrectable Command: Not Supported 00:22:10.736 Dataset Management Command: Supported 00:22:10.736 Write Zeroes Command: Supported 00:22:10.736 Set Features Save Field: Supported 00:22:10.736 Reservations: Not Supported 00:22:10.736 Timestamp: Supported 00:22:10.736 Copy: Supported 00:22:10.736 Volatile Write Cache: Present 00:22:10.736 Atomic Write Unit (Normal): 1 00:22:10.736 Atomic Write Unit (PFail): 1 00:22:10.736 Atomic Compare & Write Unit: 1 00:22:10.736 Fused Compare & Write: Not Supported 00:22:10.736 Scatter-Gather List 00:22:10.736 SGL Command Set: Supported 00:22:10.736 SGL Keyed: Not Supported 00:22:10.736 SGL Bit Bucket Descriptor: Not Supported 00:22:10.736 SGL Metadata Pointer: Not Supported 00:22:10.736 Oversized SGL: Not Supported 00:22:10.736 SGL Metadata Address: Not Supported 00:22:10.736 SGL Offset: Not Supported 00:22:10.736 Transport SGL Data Block: Not Supported 00:22:10.736 Replay Protected Memory Block: Not Supported 00:22:10.736 00:22:10.736 Firmware Slot Information 00:22:10.736 ========================= 00:22:10.736 Active slot: 1 00:22:10.736 Slot 1 Firmware Revision: 1.0 00:22:10.736 00:22:10.736 00:22:10.736 Commands Supported and Effects 00:22:10.736 ============================== 00:22:10.736 Admin Commands 00:22:10.736 -------------- 00:22:10.736 Delete I/O Submission Queue (00h): Supported 00:22:10.736 
Create I/O Submission Queue (01h): Supported 00:22:10.736 Get Log Page (02h): Supported 00:22:10.736 Delete I/O Completion Queue (04h): Supported 00:22:10.736 Create I/O Completion Queue (05h): Supported 00:22:10.736 Identify (06h): Supported 00:22:10.736 Abort (08h): Supported 00:22:10.736 Set Features (09h): Supported 00:22:10.736 Get Features (0Ah): Supported 00:22:10.736 Asynchronous Event Request (0Ch): Supported 00:22:10.736 Namespace Attachment (15h): Supported NS-Inventory-Change 00:22:10.736 Directive Send (19h): Supported 00:22:10.736 Directive Receive (1Ah): Supported 00:22:10.736 Virtualization Management (1Ch): Supported 00:22:10.736 Doorbell Buffer Config (7Ch): Supported 00:22:10.736 Format NVM (80h): Supported LBA-Change 00:22:10.736 I/O Commands 00:22:10.736 ------------ 00:22:10.736 Flush (00h): Supported LBA-Change 00:22:10.736 Write (01h): Supported LBA-Change 00:22:10.736 Read (02h): Supported 00:22:10.736 Compare (05h): Supported 00:22:10.737 Write Zeroes (08h): Supported LBA-Change 00:22:10.737 Dataset Management (09h): Supported LBA-Change 00:22:10.737 Unknown (0Ch): Supported 00:22:10.737 Unknown (12h): Supported 00:22:10.737 Copy (19h): Supported LBA-Change 00:22:10.737 Unknown (1Dh): Supported LBA-Change 00:22:10.737 00:22:10.737 Error Log 00:22:10.737 ========= 00:22:10.737 00:22:10.737 Arbitration 00:22:10.737 =========== 00:22:10.737 Arbitration Burst: no limit 00:22:10.737 00:22:10.737 Power Management 00:22:10.737 ================ 00:22:10.737 Number of Power States: 1 00:22:10.737 Current Power State: Power State #0 00:22:10.737 Power State #0: 00:22:10.737 Max Power: 25.00 W 00:22:10.737 Non-Operational State: Operational 00:22:10.737 Entry Latency: 16 microseconds 00:22:10.737 Exit Latency: 4 microseconds 00:22:10.737 Relative Read Throughput: 0 00:22:10.737 Relative Read Latency: 0 00:22:10.737 Relative Write Throughput: 0 00:22:10.737 Relative Write Latency: 0 00:22:10.737 [2024-10-08 18:49:39.202297] nvme_ctrlr.c:3659:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 65176 terminated unexpected 00:22:10.737 Idle Power: Not Reported 00:22:10.737 Active Power: Not Reported 00:22:10.737 Non-Operational Permissive Mode: Not Supported 00:22:10.737 00:22:10.737 Health Information 00:22:10.737 ================== 00:22:10.737 Critical Warnings: 00:22:10.737 Available Spare Space: OK 00:22:10.737 Temperature: OK 00:22:10.737 Device Reliability: OK 00:22:10.737 Read Only: No 00:22:10.737 Volatile Memory Backup: OK 00:22:10.737 Current Temperature: 323 Kelvin (50 Celsius) 00:22:10.737 Temperature Threshold: 343 Kelvin (70 Celsius) 00:22:10.737 Available Spare: 0% 00:22:10.737 Available Spare Threshold: 0% 00:22:10.737 Life Percentage Used: 0% 00:22:10.737 Data Units Read: 605 00:22:10.737 Data Units Written: 533 00:22:10.737 Host Read Commands: 30527 00:22:10.737 Host Write Commands: 30329 00:22:10.737 Controller Busy Time: 0 minutes 00:22:10.737 Power Cycles: 0 00:22:10.737 Power On Hours: 0 hours 00:22:10.737 Unsafe Shutdowns: 0 00:22:10.737 Unrecoverable Media Errors: 0 00:22:10.737 Lifetime Error Log Entries: 0 00:22:10.737 Warning Temperature Time: 0 minutes 00:22:10.737 Critical Temperature Time: 0 minutes 00:22:10.737 00:22:10.737 Number of Queues 00:22:10.737 ================ 00:22:10.737 Number of I/O Submission Queues: 64 00:22:10.737 Number of I/O Completion Queues: 64 00:22:10.737 00:22:10.737 ZNS Specific Controller Data 00:22:10.737 ============================ 00:22:10.737 Zone Append Size Limit: 0 00:22:10.737 00:22:10.737 00:22:10.737 Active Namespaces 
00:22:10.737 ================= 00:22:10.737 Namespace ID:1 00:22:10.737 Error Recovery Timeout: Unlimited 00:22:10.737 Command Set Identifier: NVM (00h) 00:22:10.737 Deallocate: Supported 00:22:10.737 Deallocated/Unwritten Error: Supported 00:22:10.737 Deallocated Read Value: All 0x00 00:22:10.737 Deallocate in Write Zeroes: Not Supported 00:22:10.737 Deallocated Guard Field: 0xFFFF 00:22:10.737 Flush: Supported 00:22:10.737 Reservation: Not Supported 00:22:10.737 Metadata Transferred as: Separate Metadata Buffer 00:22:10.737 Namespace Sharing Capabilities: Private 00:22:10.737 Size (in LBAs): 1548666 (5GiB) 00:22:10.737 Capacity (in LBAs): 1548666 (5GiB) 00:22:10.737 Utilization (in LBAs): 1548666 (5GiB) 00:22:10.737 Thin Provisioning: Not Supported 00:22:10.737 Per-NS Atomic Units: No 00:22:10.737 Maximum Single Source Range Length: 128 00:22:10.737 Maximum Copy Length: 128 00:22:10.737 Maximum Source Range Count: 128 00:22:10.737 NGUID/EUI64 Never Reused: No 00:22:10.737 Namespace Write Protected: No 00:22:10.737 Number of LBA Formats: 8 00:22:10.737 Current LBA Format: LBA Format #07 00:22:10.737 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:10.737 LBA Format #01: Data Size: 512 Metadata Size: 8 00:22:10.737 LBA Format #02: Data Size: 512 Metadata Size: 16 00:22:10.737 LBA Format #03: Data Size: 512 Metadata Size: 64 00:22:10.737 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:22:10.737 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:22:10.737 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:22:10.737 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:22:10.737 00:22:10.737 NVM Specific Namespace Data 00:22:10.737 =========================== 00:22:10.737 Logical Block Storage Tag Mask: 0 00:22:10.737 Protection Information Capabilities: 00:22:10.737 16b Guard Protection Information Storage Tag Support: No 00:22:10.737 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:22:10.737 Storage Tag Check Read Support: No 00:22:10.737 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:10.737 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:10.737 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:10.737 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:10.737 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:10.737 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:10.737 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:10.737 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:10.737 ===================================================== 00:22:10.737 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:22:10.737 ===================================================== 00:22:10.737 Controller Capabilities/Features 00:22:10.737 ================================ 00:22:10.737 Vendor ID: 1b36 00:22:10.737 Subsystem Vendor ID: 1af4 00:22:10.737 Serial Number: 12341 00:22:10.737 Model Number: QEMU NVMe Ctrl 00:22:10.737 Firmware Version: 8.0.0 00:22:10.737 Recommended Arb Burst: 6 00:22:10.737 IEEE OUI Identifier: 00 54 52 00:22:10.737 Multi-path I/O 00:22:10.737 May have multiple subsystem ports: No 00:22:10.737 May have multiple controllers: No 00:22:10.737 Associated with SR-IOV VF: No 
00:22:10.737 Max Data Transfer Size: 524288 00:22:10.737 Max Number of Namespaces: 256 00:22:10.737 Max Number of I/O Queues: 64 00:22:10.737 NVMe Specification Version (VS): 1.4 00:22:10.737 NVMe Specification Version (Identify): 1.4 00:22:10.737 Maximum Queue Entries: 2048 00:22:10.737 Contiguous Queues Required: Yes 00:22:10.737 Arbitration Mechanisms Supported 00:22:10.737 Weighted Round Robin: Not Supported 00:22:10.737 Vendor Specific: Not Supported 00:22:10.737 Reset Timeout: 7500 ms 00:22:10.737 Doorbell Stride: 4 bytes 00:22:10.737 NVM Subsystem Reset: Not Supported 00:22:10.737 Command Sets Supported 00:22:10.737 NVM Command Set: Supported 00:22:10.737 Boot Partition: Not Supported 00:22:10.737 Memory Page Size Minimum: 4096 bytes 00:22:10.737 Memory Page Size Maximum: 65536 bytes 00:22:10.737 Persistent Memory Region: Not Supported 00:22:10.737 Optional Asynchronous Events Supported 00:22:10.737 Namespace Attribute Notices: Supported 00:22:10.737 Firmware Activation Notices: Not Supported 00:22:10.737 ANA Change Notices: Not Supported 00:22:10.737 PLE Aggregate Log Change Notices: Not Supported 00:22:10.737 LBA Status Info Alert Notices: Not Supported 00:22:10.737 EGE Aggregate Log Change Notices: Not Supported 00:22:10.737 Normal NVM Subsystem Shutdown event: Not Supported 00:22:10.737 Zone Descriptor Change Notices: Not Supported 00:22:10.737 Discovery Log Change Notices: Not Supported 00:22:10.737 Controller Attributes 00:22:10.737 128-bit Host Identifier: Not Supported 00:22:10.737 Non-Operational Permissive Mode: Not Supported 00:22:10.737 NVM Sets: Not Supported 00:22:10.737 Read Recovery Levels: Not Supported 00:22:10.737 Endurance Groups: Not Supported 00:22:10.737 Predictable Latency Mode: Not Supported 00:22:10.737 Traffic Based Keep ALive: Not Supported 00:22:10.737 Namespace Granularity: Not Supported 00:22:10.737 SQ Associations: Not Supported 00:22:10.737 UUID List: Not Supported 00:22:10.737 Multi-Domain Subsystem: Not Supported 00:22:10.737 Fixed Capacity Management: Not Supported 00:22:10.737 Variable Capacity Management: Not Supported 00:22:10.737 Delete Endurance Group: Not Supported 00:22:10.737 Delete NVM Set: Not Supported 00:22:10.737 Extended LBA Formats Supported: Supported 00:22:10.737 Flexible Data Placement Supported: Not Supported 00:22:10.737 00:22:10.737 Controller Memory Buffer Support 00:22:10.737 ================================ 00:22:10.737 Supported: No 00:22:10.737 00:22:10.737 Persistent Memory Region Support 00:22:10.737 ================================ 00:22:10.737 Supported: No 00:22:10.737 00:22:10.737 Admin Command Set Attributes 00:22:10.737 ============================ 00:22:10.737 Security Send/Receive: Not Supported 00:22:10.737 Format NVM: Supported 00:22:10.737 Firmware Activate/Download: Not Supported 00:22:10.737 Namespace Management: Supported 00:22:10.737 Device Self-Test: Not Supported 00:22:10.737 Directives: Supported 00:22:10.737 NVMe-MI: Not Supported 00:22:10.737 Virtualization Management: Not Supported 00:22:10.737 Doorbell Buffer Config: Supported 00:22:10.737 Get LBA Status Capability: Not Supported 00:22:10.737 Command & Feature Lockdown Capability: Not Supported 00:22:10.738 Abort Command Limit: 4 00:22:10.738 Async Event Request Limit: 4 00:22:10.738 Number of Firmware Slots: N/A 00:22:10.738 Firmware Slot 1 Read-Only: N/A 00:22:10.738 Firmware Activation Without Reset: N/A 00:22:10.738 Multiple Update Detection Support: N/A 00:22:10.738 Firmware Update Granularity: No Information Provided 00:22:10.738 
Per-Namespace SMART Log: Yes 00:22:10.738 Asymmetric Namespace Access Log Page: Not Supported 00:22:10.738 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:22:10.738 Command Effects Log Page: Supported 00:22:10.738 Get Log Page Extended Data: Supported 00:22:10.738 Telemetry Log Pages: Not Supported 00:22:10.738 Persistent Event Log Pages: Not Supported 00:22:10.738 Supported Log Pages Log Page: May Support 00:22:10.738 Commands Supported & Effects Log Page: Not Supported 00:22:10.738 Feature Identifiers & Effects Log Page:May Support 00:22:10.738 NVMe-MI Commands & Effects Log Page: May Support 00:22:10.738 Data Area 4 for Telemetry Log: Not Supported 00:22:10.738 Error Log Page Entries Supported: 1 00:22:10.738 Keep Alive: Not Supported 00:22:10.738 00:22:10.738 NVM Command Set Attributes 00:22:10.738 ========================== 00:22:10.738 Submission Queue Entry Size 00:22:10.738 Max: 64 00:22:10.738 Min: 64 00:22:10.738 Completion Queue Entry Size 00:22:10.738 Max: 16 00:22:10.738 Min: 16 00:22:10.738 Number of Namespaces: 256 00:22:10.738 Compare Command: Supported 00:22:10.738 Write Uncorrectable Command: Not Supported 00:22:10.738 Dataset Management Command: Supported 00:22:10.738 Write Zeroes Command: Supported 00:22:10.738 Set Features Save Field: Supported 00:22:10.738 Reservations: Not Supported 00:22:10.738 Timestamp: Supported 00:22:10.738 Copy: Supported 00:22:10.738 Volatile Write Cache: Present 00:22:10.738 Atomic Write Unit (Normal): 1 00:22:10.738 Atomic Write Unit (PFail): 1 00:22:10.738 Atomic Compare & Write Unit: 1 00:22:10.738 Fused Compare & Write: Not Supported 00:22:10.738 Scatter-Gather List 00:22:10.738 SGL Command Set: Supported 00:22:10.738 SGL Keyed: Not Supported 00:22:10.738 SGL Bit Bucket Descriptor: Not Supported 00:22:10.738 SGL Metadata Pointer: Not Supported 00:22:10.738 Oversized SGL: Not Supported 00:22:10.738 SGL Metadata Address: Not Supported 00:22:10.738 SGL Offset: Not Supported 00:22:10.738 Transport SGL Data Block: Not Supported 00:22:10.738 Replay Protected Memory Block: Not Supported 00:22:10.738 00:22:10.738 Firmware Slot Information 00:22:10.738 ========================= 00:22:10.738 Active slot: 1 00:22:10.738 Slot 1 Firmware Revision: 1.0 00:22:10.738 00:22:10.738 00:22:10.738 Commands Supported and Effects 00:22:10.738 ============================== 00:22:10.738 Admin Commands 00:22:10.738 -------------- 00:22:10.738 Delete I/O Submission Queue (00h): Supported 00:22:10.738 Create I/O Submission Queue (01h): Supported 00:22:10.738 Get Log Page (02h): Supported 00:22:10.738 Delete I/O Completion Queue (04h): Supported 00:22:10.738 Create I/O Completion Queue (05h): Supported 00:22:10.738 Identify (06h): Supported 00:22:10.738 Abort (08h): Supported 00:22:10.738 Set Features (09h): Supported 00:22:10.738 Get Features (0Ah): Supported 00:22:10.738 Asynchronous Event Request (0Ch): Supported 00:22:10.738 Namespace Attachment (15h): Supported NS-Inventory-Change 00:22:10.738 Directive Send (19h): Supported 00:22:10.738 Directive Receive (1Ah): Supported 00:22:10.738 Virtualization Management (1Ch): Supported 00:22:10.738 Doorbell Buffer Config (7Ch): Supported 00:22:10.738 Format NVM (80h): Supported LBA-Change 00:22:10.738 I/O Commands 00:22:10.738 ------------ 00:22:10.738 Flush (00h): Supported LBA-Change 00:22:10.738 Write (01h): Supported LBA-Change 00:22:10.738 Read (02h): Supported 00:22:10.738 Compare (05h): Supported 00:22:10.738 Write Zeroes (08h): Supported LBA-Change 00:22:10.738 Dataset Management (09h): Supported LBA-Change 
00:22:10.738 Unknown (0Ch): Supported 00:22:10.738 Unknown (12h): Supported 00:22:10.738 Copy (19h): Supported LBA-Change 00:22:10.738 Unknown (1Dh): Supported LBA-Change 00:22:10.738 00:22:10.738 Error Log 00:22:10.738 ========= 00:22:10.738 00:22:10.738 Arbitration 00:22:10.738 =========== 00:22:10.738 Arbitration Burst: no limit 00:22:10.738 00:22:10.738 Power Management 00:22:10.738 ================ 00:22:10.738 Number of Power States: 1 00:22:10.738 Current Power State: Power State #0 00:22:10.738 Power State #0: 00:22:10.738 Max Power: 25.00 W 00:22:10.738 Non-Operational State: Operational 00:22:10.738 Entry Latency: 16 microseconds 00:22:10.738 Exit Latency: 4 microseconds 00:22:10.738 Relative Read Throughput: 0 00:22:10.738 Relative Read Latency: 0 00:22:10.738 Relative Write Throughput: 0 00:22:10.738 Relative Write Latency: 0 00:22:10.738 Idle Power: Not Reported 00:22:10.738 Active Power: Not Reported 00:22:10.738 Non-Operational Permissive Mode: Not Supported 00:22:10.738 00:22:10.738 Health Information 00:22:10.738 ================== 00:22:10.738 Critical Warnings: 00:22:10.738 Available Spare Space: OK 00:22:10.738 [2024-10-08 18:49:39.203521] nvme_ctrlr.c:3659:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 65176 terminated unexpected 00:22:10.738 Temperature: OK 00:22:10.738 Device Reliability: OK 00:22:10.738 Read Only: No 00:22:10.738 Volatile Memory Backup: OK 00:22:10.738 Current Temperature: 323 Kelvin (50 Celsius) 00:22:10.738 Temperature Threshold: 343 Kelvin (70 Celsius) 00:22:10.738 Available Spare: 0% 00:22:10.738 Available Spare Threshold: 0% 00:22:10.738 Life Percentage Used: 0% 00:22:10.738 Data Units Read: 903 00:22:10.738 Data Units Written: 763 00:22:10.738 Host Read Commands: 45418 00:22:10.738 Host Write Commands: 44129 00:22:10.738 Controller Busy Time: 0 minutes 00:22:10.738 Power Cycles: 0 00:22:10.738 Power On Hours: 0 hours 00:22:10.738 Unsafe Shutdowns: 0 00:22:10.738 Unrecoverable Media Errors: 0 00:22:10.738 Lifetime Error Log Entries: 0 00:22:10.738 Warning Temperature Time: 0 minutes 00:22:10.738 Critical Temperature Time: 0 minutes 00:22:10.738 00:22:10.738 Number of Queues 00:22:10.738 ================ 00:22:10.738 Number of I/O Submission Queues: 64 00:22:10.738 Number of I/O Completion Queues: 64 00:22:10.738 00:22:10.738 ZNS Specific Controller Data 00:22:10.738 ============================ 00:22:10.738 Zone Append Size Limit: 0 00:22:10.738 00:22:10.738 00:22:10.738 Active Namespaces 00:22:10.738 ================= 00:22:10.738 Namespace ID:1 00:22:10.738 Error Recovery Timeout: Unlimited 00:22:10.738 Command Set Identifier: NVM (00h) 00:22:10.738 Deallocate: Supported 00:22:10.738 Deallocated/Unwritten Error: Supported 00:22:10.738 Deallocated Read Value: All 0x00 00:22:10.738 Deallocate in Write Zeroes: Not Supported 00:22:10.738 Deallocated Guard Field: 0xFFFF 00:22:10.738 Flush: Supported 00:22:10.738 Reservation: Not Supported 00:22:10.738 Namespace Sharing Capabilities: Private 00:22:10.738 Size (in LBAs): 1310720 (5GiB) 00:22:10.738 Capacity (in LBAs): 1310720 (5GiB) 00:22:10.738 Utilization (in LBAs): 1310720 (5GiB) 00:22:10.738 Thin Provisioning: Not Supported 00:22:10.738 Per-NS Atomic Units: No 00:22:10.738 Maximum Single Source Range Length: 128 00:22:10.738 Maximum Copy Length: 128 00:22:10.738 Maximum Source Range Count: 128 00:22:10.738 NGUID/EUI64 Never Reused: No 00:22:10.738 Namespace Write Protected: No 00:22:10.738 Number of LBA Formats: 8 00:22:10.738 Current LBA Format: LBA 
Format #04 00:22:10.738 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:10.738 LBA Format #01: Data Size: 512 Metadata Size: 8 00:22:10.738 LBA Format #02: Data Size: 512 Metadata Size: 16 00:22:10.738 LBA Format #03: Data Size: 512 Metadata Size: 64 00:22:10.738 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:22:10.738 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:22:10.738 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:22:10.738 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:22:10.738 00:22:10.738 NVM Specific Namespace Data 00:22:10.738 =========================== 00:22:10.738 Logical Block Storage Tag Mask: 0 00:22:10.738 Protection Information Capabilities: 00:22:10.738 16b Guard Protection Information Storage Tag Support: No 00:22:10.738 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:22:10.738 Storage Tag Check Read Support: No 00:22:10.738 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:10.738 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:10.738 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:10.738 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:10.738 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:10.738 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:10.738 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:10.738 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:10.738 ===================================================== 00:22:10.738 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:22:10.738 ===================================================== 00:22:10.738 Controller Capabilities/Features 00:22:10.738 ================================ 00:22:10.738 Vendor ID: 1b36 00:22:10.738 Subsystem Vendor ID: 1af4 00:22:10.738 Serial Number: 12343 00:22:10.738 Model Number: QEMU NVMe Ctrl 00:22:10.739 Firmware Version: 8.0.0 00:22:10.739 Recommended Arb Burst: 6 00:22:10.739 IEEE OUI Identifier: 00 54 52 00:22:10.739 Multi-path I/O 00:22:10.739 May have multiple subsystem ports: No 00:22:10.739 May have multiple controllers: Yes 00:22:10.739 Associated with SR-IOV VF: No 00:22:10.739 Max Data Transfer Size: 524288 00:22:10.739 Max Number of Namespaces: 256 00:22:10.739 Max Number of I/O Queues: 64 00:22:10.739 NVMe Specification Version (VS): 1.4 00:22:10.739 NVMe Specification Version (Identify): 1.4 00:22:10.739 Maximum Queue Entries: 2048 00:22:10.739 Contiguous Queues Required: Yes 00:22:10.739 Arbitration Mechanisms Supported 00:22:10.739 Weighted Round Robin: Not Supported 00:22:10.739 Vendor Specific: Not Supported 00:22:10.739 Reset Timeout: 7500 ms 00:22:10.739 Doorbell Stride: 4 bytes 00:22:10.739 NVM Subsystem Reset: Not Supported 00:22:10.739 Command Sets Supported 00:22:10.739 NVM Command Set: Supported 00:22:10.739 Boot Partition: Not Supported 00:22:10.739 Memory Page Size Minimum: 4096 bytes 00:22:10.739 Memory Page Size Maximum: 65536 bytes 00:22:10.739 Persistent Memory Region: Not Supported 00:22:10.739 Optional Asynchronous Events Supported 00:22:10.739 Namespace Attribute Notices: Supported 00:22:10.739 Firmware Activation Notices: Not Supported 00:22:10.739 ANA Change Notices: Not Supported 00:22:10.739 PLE Aggregate Log Change 
Notices: Not Supported 00:22:10.739 LBA Status Info Alert Notices: Not Supported 00:22:10.739 EGE Aggregate Log Change Notices: Not Supported 00:22:10.739 Normal NVM Subsystem Shutdown event: Not Supported 00:22:10.739 Zone Descriptor Change Notices: Not Supported 00:22:10.739 Discovery Log Change Notices: Not Supported 00:22:10.739 Controller Attributes 00:22:10.739 128-bit Host Identifier: Not Supported 00:22:10.739 Non-Operational Permissive Mode: Not Supported 00:22:10.739 NVM Sets: Not Supported 00:22:10.739 Read Recovery Levels: Not Supported 00:22:10.739 Endurance Groups: Supported 00:22:10.739 Predictable Latency Mode: Not Supported 00:22:10.739 Traffic Based Keep ALive: Not Supported 00:22:10.739 Namespace Granularity: Not Supported 00:22:10.739 SQ Associations: Not Supported 00:22:10.739 UUID List: Not Supported 00:22:10.739 Multi-Domain Subsystem: Not Supported 00:22:10.739 Fixed Capacity Management: Not Supported 00:22:10.739 Variable Capacity Management: Not Supported 00:22:10.739 Delete Endurance Group: Not Supported 00:22:10.739 Delete NVM Set: Not Supported 00:22:10.739 Extended LBA Formats Supported: Supported 00:22:10.739 Flexible Data Placement Supported: Supported 00:22:10.739 00:22:10.739 Controller Memory Buffer Support 00:22:10.739 ================================ 00:22:10.739 Supported: No 00:22:10.739 00:22:10.739 Persistent Memory Region Support 00:22:10.739 ================================ 00:22:10.739 Supported: No 00:22:10.739 00:22:10.739 Admin Command Set Attributes 00:22:10.739 ============================ 00:22:10.739 Security Send/Receive: Not Supported 00:22:10.739 Format NVM: Supported 00:22:10.739 Firmware Activate/Download: Not Supported 00:22:10.739 Namespace Management: Supported 00:22:10.739 Device Self-Test: Not Supported 00:22:10.739 Directives: Supported 00:22:10.739 NVMe-MI: Not Supported 00:22:10.739 Virtualization Management: Not Supported 00:22:10.739 Doorbell Buffer Config: Supported 00:22:10.739 Get LBA Status Capability: Not Supported 00:22:10.739 Command & Feature Lockdown Capability: Not Supported 00:22:10.739 Abort Command Limit: 4 00:22:10.739 Async Event Request Limit: 4 00:22:10.739 Number of Firmware Slots: N/A 00:22:10.739 Firmware Slot 1 Read-Only: N/A 00:22:10.739 Firmware Activation Without Reset: N/A 00:22:10.739 Multiple Update Detection Support: N/A 00:22:10.739 Firmware Update Granularity: No Information Provided 00:22:10.739 Per-Namespace SMART Log: Yes 00:22:10.739 Asymmetric Namespace Access Log Page: Not Supported 00:22:10.739 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:22:10.739 Command Effects Log Page: Supported 00:22:10.739 Get Log Page Extended Data: Supported 00:22:10.739 Telemetry Log Pages: Not Supported 00:22:10.739 Persistent Event Log Pages: Not Supported 00:22:10.739 Supported Log Pages Log Page: May Support 00:22:10.739 Commands Supported & Effects Log Page: Not Supported 00:22:10.739 Feature Identifiers & Effects Log Page:May Support 00:22:10.739 NVMe-MI Commands & Effects Log Page: May Support 00:22:10.739 Data Area 4 for Telemetry Log: Not Supported 00:22:10.739 Error Log Page Entries Supported: 1 00:22:10.739 Keep Alive: Not Supported 00:22:10.739 00:22:10.739 NVM Command Set Attributes 00:22:10.739 ========================== 00:22:10.739 Submission Queue Entry Size 00:22:10.739 Max: 64 00:22:10.739 Min: 64 00:22:10.739 Completion Queue Entry Size 00:22:10.739 Max: 16 00:22:10.739 Min: 16 00:22:10.739 Number of Namespaces: 256 00:22:10.739 Compare Command: Supported 00:22:10.739 Write 
Uncorrectable Command: Not Supported 00:22:10.739 Dataset Management Command: Supported 00:22:10.739 Write Zeroes Command: Supported 00:22:10.739 Set Features Save Field: Supported 00:22:10.739 Reservations: Not Supported 00:22:10.739 Timestamp: Supported 00:22:10.739 Copy: Supported 00:22:10.739 Volatile Write Cache: Present 00:22:10.739 Atomic Write Unit (Normal): 1 00:22:10.739 Atomic Write Unit (PFail): 1 00:22:10.739 Atomic Compare & Write Unit: 1 00:22:10.739 Fused Compare & Write: Not Supported 00:22:10.739 Scatter-Gather List 00:22:10.739 SGL Command Set: Supported 00:22:10.739 SGL Keyed: Not Supported 00:22:10.739 SGL Bit Bucket Descriptor: Not Supported 00:22:10.739 SGL Metadata Pointer: Not Supported 00:22:10.739 Oversized SGL: Not Supported 00:22:10.739 SGL Metadata Address: Not Supported 00:22:10.739 SGL Offset: Not Supported 00:22:10.739 Transport SGL Data Block: Not Supported 00:22:10.739 Replay Protected Memory Block: Not Supported 00:22:10.739 00:22:10.739 Firmware Slot Information 00:22:10.739 ========================= 00:22:10.739 Active slot: 1 00:22:10.739 Slot 1 Firmware Revision: 1.0 00:22:10.739 00:22:10.739 00:22:10.739 Commands Supported and Effects 00:22:10.739 ============================== 00:22:10.739 Admin Commands 00:22:10.739 -------------- 00:22:10.739 Delete I/O Submission Queue (00h): Supported 00:22:10.739 Create I/O Submission Queue (01h): Supported 00:22:10.739 Get Log Page (02h): Supported 00:22:10.739 Delete I/O Completion Queue (04h): Supported 00:22:10.739 Create I/O Completion Queue (05h): Supported 00:22:10.739 Identify (06h): Supported 00:22:10.739 Abort (08h): Supported 00:22:10.739 Set Features (09h): Supported 00:22:10.739 Get Features (0Ah): Supported 00:22:10.739 Asynchronous Event Request (0Ch): Supported 00:22:10.739 Namespace Attachment (15h): Supported NS-Inventory-Change 00:22:10.739 Directive Send (19h): Supported 00:22:10.739 Directive Receive (1Ah): Supported 00:22:10.739 Virtualization Management (1Ch): Supported 00:22:10.739 Doorbell Buffer Config (7Ch): Supported 00:22:10.739 Format NVM (80h): Supported LBA-Change 00:22:10.739 I/O Commands 00:22:10.739 ------------ 00:22:10.739 Flush (00h): Supported LBA-Change 00:22:10.739 Write (01h): Supported LBA-Change 00:22:10.739 Read (02h): Supported 00:22:10.739 Compare (05h): Supported 00:22:10.739 Write Zeroes (08h): Supported LBA-Change 00:22:10.739 Dataset Management (09h): Supported LBA-Change 00:22:10.739 Unknown (0Ch): Supported 00:22:10.739 Unknown (12h): Supported 00:22:10.739 Copy (19h): Supported LBA-Change 00:22:10.739 Unknown (1Dh): Supported LBA-Change 00:22:10.739 00:22:10.739 Error Log 00:22:10.739 ========= 00:22:10.739 00:22:10.739 Arbitration 00:22:10.739 =========== 00:22:10.739 Arbitration Burst: no limit 00:22:10.739 00:22:10.739 Power Management 00:22:10.739 ================ 00:22:10.739 Number of Power States: 1 00:22:10.739 Current Power State: Power State #0 00:22:10.739 Power State #0: 00:22:10.739 Max Power: 25.00 W 00:22:10.739 Non-Operational State: Operational 00:22:10.739 Entry Latency: 16 microseconds 00:22:10.739 Exit Latency: 4 microseconds 00:22:10.739 Relative Read Throughput: 0 00:22:10.739 Relative Read Latency: 0 00:22:10.739 Relative Write Throughput: 0 00:22:10.739 Relative Write Latency: 0 00:22:10.739 Idle Power: Not Reported 00:22:10.739 Active Power: Not Reported 00:22:10.739 Non-Operational Permissive Mode: Not Supported 00:22:10.739 00:22:10.739 Health Information 00:22:10.739 ================== 00:22:10.739 Critical Warnings: 00:22:10.739 
Available Spare Space: OK 00:22:10.739 Temperature: OK 00:22:10.739 Device Reliability: OK 00:22:10.739 Read Only: No 00:22:10.739 Volatile Memory Backup: OK 00:22:10.739 Current Temperature: 323 Kelvin (50 Celsius) 00:22:10.739 Temperature Threshold: 343 Kelvin (70 Celsius) 00:22:10.739 Available Spare: 0% 00:22:10.739 Available Spare Threshold: 0% 00:22:10.739 Life Percentage Used: 0% 00:22:10.739 Data Units Read: 760 00:22:10.739 Data Units Written: 689 00:22:10.740 Host Read Commands: 32115 00:22:10.740 Host Write Commands: 31538 00:22:10.740 Controller Busy Time: 0 minutes 00:22:10.740 Power Cycles: 0 00:22:10.740 Power On Hours: 0 hours 00:22:10.740 Unsafe Shutdowns: 0 00:22:10.740 Unrecoverable Media Errors: 0 00:22:10.740 Lifetime Error Log Entries: 0 00:22:10.740 Warning Temperature Time: 0 minutes 00:22:10.740 Critical Temperature Time: 0 minutes 00:22:10.740 00:22:10.740 Number of Queues 00:22:10.740 ================ 00:22:10.740 Number of I/O Submission Queues: 64 00:22:10.740 Number of I/O Completion Queues: 64 00:22:10.740 00:22:10.740 ZNS Specific Controller Data 00:22:10.740 ============================ 00:22:10.740 Zone Append Size Limit: 0 00:22:10.740 00:22:10.740 00:22:10.740 Active Namespaces 00:22:10.740 ================= 00:22:10.740 Namespace ID:1 00:22:10.740 Error Recovery Timeout: Unlimited 00:22:10.740 Command Set Identifier: NVM (00h) 00:22:10.740 Deallocate: Supported 00:22:10.740 Deallocated/Unwritten Error: Supported 00:22:10.740 Deallocated Read Value: All 0x00 00:22:10.740 Deallocate in Write Zeroes: Not Supported 00:22:10.740 Deallocated Guard Field: 0xFFFF 00:22:10.740 Flush: Supported 00:22:10.740 Reservation: Not Supported 00:22:10.740 Namespace Sharing Capabilities: Multiple Controllers 00:22:10.740 Size (in LBAs): 262144 (1GiB) 00:22:10.740 Capacity (in LBAs): 262144 (1GiB) 00:22:10.740 Utilization (in LBAs): 262144 (1GiB) 00:22:10.740 Thin Provisioning: Not Supported 00:22:10.740 Per-NS Atomic Units: No 00:22:10.740 Maximum Single Source Range Length: 128 00:22:10.740 Maximum Copy Length: 128 00:22:10.740 Maximum Source Range Count: 128 00:22:10.740 NGUID/EUI64 Never Reused: No 00:22:10.740 Namespace Write Protected: No 00:22:10.740 Endurance group ID: 1 00:22:10.740 Number of LBA Formats: 8 00:22:10.740 Current LBA Format: LBA Format #04 00:22:10.740 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:10.740 LBA Format #01: Data Size: 512 Metadata Size: 8 00:22:10.740 LBA Format #02: Data Size: 512 Metadata Size: 16 00:22:10.740 LBA Format #03: Data Size: 512 Metadata Size: 64 00:22:10.740 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:22:10.740 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:22:10.740 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:22:10.740 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:22:10.740 00:22:10.740 Get Feature FDP: 00:22:10.740 ================ 00:22:10.740 Enabled: Yes 00:22:10.740 FDP configuration index: 0 00:22:10.740 00:22:10.740 FDP configurations log page 00:22:10.740 =========================== 00:22:10.740 Number of FDP configurations: 1 00:22:10.740 Version: 0 00:22:10.740 Size: 112 00:22:10.740 FDP Configuration Descriptor: 0 00:22:10.740 Descriptor Size: 96 00:22:10.740 Reclaim Group Identifier format: 2 00:22:10.740 FDP Volatile Write Cache: Not Present 00:22:10.740 FDP Configuration: Valid 00:22:10.740 Vendor Specific Size: 0 00:22:10.740 Number of Reclaim Groups: 2 00:22:10.740 Number of Reclaim Unit Handles: 8 00:22:10.740 Max Placement Identifiers: 128 00:22:10.740 Number of 
Namespaces Supported: 256 00:22:10.740 Reclaim Unit Nominal Size: 6000000 bytes 00:22:10.740 Estimated Reclaim Unit Time Limit: Not Reported 00:22:10.740 RUH Desc #000: RUH Type: Initially Isolated 00:22:10.740 RUH Desc #001: RUH Type: Initially Isolated 00:22:10.740 RUH Desc #002: RUH Type: Initially Isolated 00:22:10.740 RUH Desc #003: RUH Type: Initially Isolated 00:22:10.740 RUH Desc #004: RUH Type: Initially Isolated 00:22:10.740 RUH Desc #005: RUH Type: Initially Isolated 00:22:10.740 RUH Desc #006: RUH Type: Initially Isolated 00:22:10.740 RUH Desc #007: RUH Type: Initially Isolated 00:22:10.740 00:22:10.740 FDP reclaim unit handle usage log page 00:22:10.740 ====================================== 00:22:10.740 Number of Reclaim Unit Handles: 8 00:22:10.740 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:22:10.740 RUH Usage Desc #001: RUH Attributes: Unused 00:22:10.740 RUH Usage Desc #002: RUH Attributes: Unused 00:22:10.740 RUH Usage Desc #003: RUH Attributes: Unused 00:22:10.740 RUH Usage Desc #004: RUH Attributes: Unused 00:22:10.740 RUH Usage Desc #005: RUH Attributes: Unused 00:22:10.740 RUH Usage Desc #006: RUH Attributes: Unused 00:22:10.740 RUH Usage Desc #007: RUH Attributes: Unused 00:22:10.740 00:22:10.740 FDP statistics log page 00:22:10.740 ======================= 00:22:10.740 Host bytes with metadata written: 430940160 00:22:10.740 [2024-10-08 18:49:39.206019] nvme_ctrlr.c:3659:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 65176 terminated unexpected 00:22:10.740 Media bytes with metadata written: 430985216 00:22:10.740 Media bytes erased: 0 00:22:10.740 00:22:10.740 FDP events log page 00:22:10.740 =================== 00:22:10.740 Number of FDP events: 0 00:22:10.740 00:22:10.740 NVM Specific Namespace Data 00:22:10.740 =========================== 00:22:10.740 Logical Block Storage Tag Mask: 0 00:22:10.740 Protection Information Capabilities: 00:22:10.740 16b Guard Protection Information Storage Tag Support: No 00:22:10.740 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:22:10.740 Storage Tag Check Read Support: No 00:22:10.740 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:10.740 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:10.740 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:10.740 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:10.740 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:10.740 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:10.740 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:10.740 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:10.740 ===================================================== 00:22:10.740 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:22:10.740 ===================================================== 00:22:10.740 Controller Capabilities/Features 00:22:10.740 ================================ 00:22:10.740 Vendor ID: 1b36 00:22:10.740 Subsystem Vendor ID: 1af4 00:22:10.740 Serial Number: 12342 00:22:10.740 Model Number: QEMU NVMe Ctrl 00:22:10.740 Firmware Version: 8.0.0 00:22:10.740 Recommended Arb Burst: 6 00:22:10.740 IEEE OUI Identifier: 00 54 52 00:22:10.740 Multi-path I/O 00:22:10.740 
May have multiple subsystem ports: No 00:22:10.740 May have multiple controllers: No 00:22:10.740 Associated with SR-IOV VF: No 00:22:10.740 Max Data Transfer Size: 524288 00:22:10.740 Max Number of Namespaces: 256 00:22:10.740 Max Number of I/O Queues: 64 00:22:10.740 NVMe Specification Version (VS): 1.4 00:22:10.740 NVMe Specification Version (Identify): 1.4 00:22:10.740 Maximum Queue Entries: 2048 00:22:10.740 Contiguous Queues Required: Yes 00:22:10.740 Arbitration Mechanisms Supported 00:22:10.740 Weighted Round Robin: Not Supported 00:22:10.740 Vendor Specific: Not Supported 00:22:10.740 Reset Timeout: 7500 ms 00:22:10.740 Doorbell Stride: 4 bytes 00:22:10.740 NVM Subsystem Reset: Not Supported 00:22:10.740 Command Sets Supported 00:22:10.740 NVM Command Set: Supported 00:22:10.740 Boot Partition: Not Supported 00:22:10.740 Memory Page Size Minimum: 4096 bytes 00:22:10.740 Memory Page Size Maximum: 65536 bytes 00:22:10.740 Persistent Memory Region: Not Supported 00:22:10.740 Optional Asynchronous Events Supported 00:22:10.740 Namespace Attribute Notices: Supported 00:22:10.740 Firmware Activation Notices: Not Supported 00:22:10.740 ANA Change Notices: Not Supported 00:22:10.740 PLE Aggregate Log Change Notices: Not Supported 00:22:10.740 LBA Status Info Alert Notices: Not Supported 00:22:10.740 EGE Aggregate Log Change Notices: Not Supported 00:22:10.740 Normal NVM Subsystem Shutdown event: Not Supported 00:22:10.740 Zone Descriptor Change Notices: Not Supported 00:22:10.740 Discovery Log Change Notices: Not Supported 00:22:10.740 Controller Attributes 00:22:10.740 128-bit Host Identifier: Not Supported 00:22:10.740 Non-Operational Permissive Mode: Not Supported 00:22:10.740 NVM Sets: Not Supported 00:22:10.740 Read Recovery Levels: Not Supported 00:22:10.741 Endurance Groups: Not Supported 00:22:10.741 Predictable Latency Mode: Not Supported 00:22:10.741 Traffic Based Keep ALive: Not Supported 00:22:10.741 Namespace Granularity: Not Supported 00:22:10.741 SQ Associations: Not Supported 00:22:10.741 UUID List: Not Supported 00:22:10.741 Multi-Domain Subsystem: Not Supported 00:22:10.741 Fixed Capacity Management: Not Supported 00:22:10.741 Variable Capacity Management: Not Supported 00:22:10.741 Delete Endurance Group: Not Supported 00:22:10.741 Delete NVM Set: Not Supported 00:22:10.741 Extended LBA Formats Supported: Supported 00:22:10.741 Flexible Data Placement Supported: Not Supported 00:22:10.741 00:22:10.741 Controller Memory Buffer Support 00:22:10.741 ================================ 00:22:10.741 Supported: No 00:22:10.741 00:22:10.741 Persistent Memory Region Support 00:22:10.741 ================================ 00:22:10.741 Supported: No 00:22:10.741 00:22:10.741 Admin Command Set Attributes 00:22:10.741 ============================ 00:22:10.741 Security Send/Receive: Not Supported 00:22:10.741 Format NVM: Supported 00:22:10.741 Firmware Activate/Download: Not Supported 00:22:10.741 Namespace Management: Supported 00:22:10.741 Device Self-Test: Not Supported 00:22:10.741 Directives: Supported 00:22:10.741 NVMe-MI: Not Supported 00:22:10.741 Virtualization Management: Not Supported 00:22:10.741 Doorbell Buffer Config: Supported 00:22:10.741 Get LBA Status Capability: Not Supported 00:22:10.741 Command & Feature Lockdown Capability: Not Supported 00:22:10.741 Abort Command Limit: 4 00:22:10.741 Async Event Request Limit: 4 00:22:10.741 Number of Firmware Slots: N/A 00:22:10.741 Firmware Slot 1 Read-Only: N/A 00:22:10.741 Firmware Activation Without Reset: N/A 00:22:10.741 
Multiple Update Detection Support: N/A 00:22:10.741 Firmware Update Granularity: No Information Provided 00:22:10.741 Per-Namespace SMART Log: Yes 00:22:10.741 Asymmetric Namespace Access Log Page: Not Supported 00:22:10.741 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:22:10.741 Command Effects Log Page: Supported 00:22:10.741 Get Log Page Extended Data: Supported 00:22:10.741 Telemetry Log Pages: Not Supported 00:22:10.741 Persistent Event Log Pages: Not Supported 00:22:10.741 Supported Log Pages Log Page: May Support 00:22:10.741 Commands Supported & Effects Log Page: Not Supported 00:22:10.741 Feature Identifiers & Effects Log Page:May Support 00:22:10.741 NVMe-MI Commands & Effects Log Page: May Support 00:22:10.741 Data Area 4 for Telemetry Log: Not Supported 00:22:10.741 Error Log Page Entries Supported: 1 00:22:10.741 Keep Alive: Not Supported 00:22:10.741 00:22:10.741 NVM Command Set Attributes 00:22:10.741 ========================== 00:22:10.741 Submission Queue Entry Size 00:22:10.741 Max: 64 00:22:10.741 Min: 64 00:22:10.741 Completion Queue Entry Size 00:22:10.741 Max: 16 00:22:10.741 Min: 16 00:22:10.741 Number of Namespaces: 256 00:22:10.741 Compare Command: Supported 00:22:10.741 Write Uncorrectable Command: Not Supported 00:22:10.741 Dataset Management Command: Supported 00:22:10.741 Write Zeroes Command: Supported 00:22:10.741 Set Features Save Field: Supported 00:22:10.741 Reservations: Not Supported 00:22:10.741 Timestamp: Supported 00:22:10.741 Copy: Supported 00:22:10.741 Volatile Write Cache: Present 00:22:10.741 Atomic Write Unit (Normal): 1 00:22:10.741 Atomic Write Unit (PFail): 1 00:22:10.741 Atomic Compare & Write Unit: 1 00:22:10.741 Fused Compare & Write: Not Supported 00:22:10.741 Scatter-Gather List 00:22:10.741 SGL Command Set: Supported 00:22:10.741 SGL Keyed: Not Supported 00:22:10.741 SGL Bit Bucket Descriptor: Not Supported 00:22:10.741 SGL Metadata Pointer: Not Supported 00:22:10.741 Oversized SGL: Not Supported 00:22:10.741 SGL Metadata Address: Not Supported 00:22:10.741 SGL Offset: Not Supported 00:22:10.741 Transport SGL Data Block: Not Supported 00:22:10.741 Replay Protected Memory Block: Not Supported 00:22:10.741 00:22:10.741 Firmware Slot Information 00:22:10.741 ========================= 00:22:10.741 Active slot: 1 00:22:10.741 Slot 1 Firmware Revision: 1.0 00:22:10.741 00:22:10.741 00:22:10.741 Commands Supported and Effects 00:22:10.741 ============================== 00:22:10.741 Admin Commands 00:22:10.741 -------------- 00:22:10.741 Delete I/O Submission Queue (00h): Supported 00:22:10.741 Create I/O Submission Queue (01h): Supported 00:22:10.741 Get Log Page (02h): Supported 00:22:10.741 Delete I/O Completion Queue (04h): Supported 00:22:10.741 Create I/O Completion Queue (05h): Supported 00:22:10.741 Identify (06h): Supported 00:22:10.741 Abort (08h): Supported 00:22:10.741 Set Features (09h): Supported 00:22:10.741 Get Features (0Ah): Supported 00:22:10.741 Asynchronous Event Request (0Ch): Supported 00:22:10.741 Namespace Attachment (15h): Supported NS-Inventory-Change 00:22:10.741 Directive Send (19h): Supported 00:22:10.741 Directive Receive (1Ah): Supported 00:22:10.741 Virtualization Management (1Ch): Supported 00:22:10.741 Doorbell Buffer Config (7Ch): Supported 00:22:10.741 Format NVM (80h): Supported LBA-Change 00:22:10.741 I/O Commands 00:22:10.741 ------------ 00:22:10.741 Flush (00h): Supported LBA-Change 00:22:10.741 Write (01h): Supported LBA-Change 00:22:10.741 Read (02h): Supported 00:22:10.741 Compare (05h): Supported 
00:22:10.741 Write Zeroes (08h): Supported LBA-Change 00:22:10.741 Dataset Management (09h): Supported LBA-Change 00:22:10.741 Unknown (0Ch): Supported 00:22:10.741 Unknown (12h): Supported 00:22:10.741 Copy (19h): Supported LBA-Change 00:22:10.741 Unknown (1Dh): Supported LBA-Change 00:22:10.741 00:22:10.741 Error Log 00:22:10.741 ========= 00:22:10.741 00:22:10.741 Arbitration 00:22:10.741 =========== 00:22:10.741 Arbitration Burst: no limit 00:22:10.741 00:22:10.741 Power Management 00:22:10.741 ================ 00:22:10.741 Number of Power States: 1 00:22:10.741 Current Power State: Power State #0 00:22:10.741 Power State #0: 00:22:10.741 Max Power: 25.00 W 00:22:10.741 Non-Operational State: Operational 00:22:10.741 Entry Latency: 16 microseconds 00:22:10.741 Exit Latency: 4 microseconds 00:22:10.741 Relative Read Throughput: 0 00:22:10.741 Relative Read Latency: 0 00:22:10.741 Relative Write Throughput: 0 00:22:10.741 Relative Write Latency: 0 00:22:10.741 Idle Power: Not Reported 00:22:10.741 Active Power: Not Reported 00:22:10.741 Non-Operational Permissive Mode: Not Supported 00:22:10.741 00:22:10.741 Health Information 00:22:10.741 ================== 00:22:10.741 Critical Warnings: 00:22:10.741 Available Spare Space: OK 00:22:10.741 Temperature: OK 00:22:10.741 Device Reliability: OK 00:22:10.741 Read Only: No 00:22:10.741 Volatile Memory Backup: OK 00:22:10.741 Current Temperature: 323 Kelvin (50 Celsius) 00:22:10.741 Temperature Threshold: 343 Kelvin (70 Celsius) 00:22:10.741 Available Spare: 0% 00:22:10.741 Available Spare Threshold: 0% 00:22:10.741 Life Percentage Used: 0% 00:22:10.741 Data Units Read: 1935 00:22:10.741 Data Units Written: 1723 00:22:10.741 Host Read Commands: 93231 00:22:10.741 Host Write Commands: 91511 00:22:10.741 Controller Busy Time: 0 minutes 00:22:10.741 Power Cycles: 0 00:22:10.741 Power On Hours: 0 hours 00:22:10.741 Unsafe Shutdowns: 0 00:22:10.741 Unrecoverable Media Errors: 0 00:22:10.741 Lifetime Error Log Entries: 0 00:22:10.741 Warning Temperature Time: 0 minutes 00:22:10.741 Critical Temperature Time: 0 minutes 00:22:10.741 00:22:10.741 Number of Queues 00:22:10.741 ================ 00:22:10.741 Number of I/O Submission Queues: 64 00:22:10.741 Number of I/O Completion Queues: 64 00:22:10.741 00:22:10.741 ZNS Specific Controller Data 00:22:10.741 ============================ 00:22:10.741 Zone Append Size Limit: 0 00:22:10.741 00:22:10.741 00:22:10.741 Active Namespaces 00:22:10.741 ================= 00:22:10.741 Namespace ID:1 00:22:10.741 Error Recovery Timeout: Unlimited 00:22:10.741 Command Set Identifier: NVM (00h) 00:22:10.741 Deallocate: Supported 00:22:10.741 Deallocated/Unwritten Error: Supported 00:22:10.741 Deallocated Read Value: All 0x00 00:22:10.741 Deallocate in Write Zeroes: Not Supported 00:22:10.741 Deallocated Guard Field: 0xFFFF 00:22:10.741 Flush: Supported 00:22:10.741 Reservation: Not Supported 00:22:10.741 Namespace Sharing Capabilities: Private 00:22:10.741 Size (in LBAs): 1048576 (4GiB) 00:22:10.741 Capacity (in LBAs): 1048576 (4GiB) 00:22:10.741 Utilization (in LBAs): 1048576 (4GiB) 00:22:10.741 Thin Provisioning: Not Supported 00:22:10.741 Per-NS Atomic Units: No 00:22:10.741 Maximum Single Source Range Length: 128 00:22:10.741 Maximum Copy Length: 128 00:22:10.741 Maximum Source Range Count: 128 00:22:10.741 NGUID/EUI64 Never Reused: No 00:22:10.741 Namespace Write Protected: No 00:22:10.741 Number of LBA Formats: 8 00:22:10.741 Current LBA Format: LBA Format #04 00:22:10.741 LBA Format #00: Data Size: 512 Metadata 
Size: 0 00:22:10.742 LBA Format #01: Data Size: 512 Metadata Size: 8 00:22:10.742 LBA Format #02: Data Size: 512 Metadata Size: 16 00:22:10.742 LBA Format #03: Data Size: 512 Metadata Size: 64 00:22:10.742 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:22:10.742 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:22:10.742 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:22:10.742 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:22:10.742 00:22:10.742 NVM Specific Namespace Data 00:22:10.742 =========================== 00:22:10.742 Logical Block Storage Tag Mask: 0 00:22:10.742 Protection Information Capabilities: 00:22:10.742 16b Guard Protection Information Storage Tag Support: No 00:22:10.742 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:22:10.742 Storage Tag Check Read Support: No 00:22:10.742 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:10.742 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:10.742 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:10.742 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:10.742 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:10.742 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:10.742 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:10.742 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:10.742 Namespace ID:2 00:22:10.742 Error Recovery Timeout: Unlimited 00:22:10.742 Command Set Identifier: NVM (00h) 00:22:10.742 Deallocate: Supported 00:22:10.742 Deallocated/Unwritten Error: Supported 00:22:10.742 Deallocated Read Value: All 0x00 00:22:10.742 Deallocate in Write Zeroes: Not Supported 00:22:10.742 Deallocated Guard Field: 0xFFFF 00:22:10.742 Flush: Supported 00:22:10.742 Reservation: Not Supported 00:22:10.742 Namespace Sharing Capabilities: Private 00:22:10.742 Size (in LBAs): 1048576 (4GiB) 00:22:10.742 Capacity (in LBAs): 1048576 (4GiB) 00:22:10.742 Utilization (in LBAs): 1048576 (4GiB) 00:22:10.742 Thin Provisioning: Not Supported 00:22:10.742 Per-NS Atomic Units: No 00:22:10.742 Maximum Single Source Range Length: 128 00:22:10.742 Maximum Copy Length: 128 00:22:10.742 Maximum Source Range Count: 128 00:22:10.742 NGUID/EUI64 Never Reused: No 00:22:10.742 Namespace Write Protected: No 00:22:10.742 Number of LBA Formats: 8 00:22:10.742 Current LBA Format: LBA Format #04 00:22:10.742 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:10.742 LBA Format #01: Data Size: 512 Metadata Size: 8 00:22:10.742 LBA Format #02: Data Size: 512 Metadata Size: 16 00:22:10.742 LBA Format #03: Data Size: 512 Metadata Size: 64 00:22:10.742 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:22:10.742 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:22:10.742 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:22:10.742 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:22:10.742 00:22:10.742 NVM Specific Namespace Data 00:22:10.742 =========================== 00:22:10.742 Logical Block Storage Tag Mask: 0 00:22:10.742 Protection Information Capabilities: 00:22:10.742 16b Guard Protection Information Storage Tag Support: No 00:22:10.742 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:22:10.742 Storage 
Tag Check Read Support: No 00:22:10.742 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:10.742 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:10.742 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:10.742 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:10.742 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:10.742 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:10.742 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:10.742 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:10.742 Namespace ID:3 00:22:10.742 Error Recovery Timeout: Unlimited 00:22:10.742 Command Set Identifier: NVM (00h) 00:22:10.742 Deallocate: Supported 00:22:10.742 Deallocated/Unwritten Error: Supported 00:22:10.742 Deallocated Read Value: All 0x00 00:22:10.742 Deallocate in Write Zeroes: Not Supported 00:22:10.742 Deallocated Guard Field: 0xFFFF 00:22:10.742 Flush: Supported 00:22:10.742 Reservation: Not Supported 00:22:10.742 Namespace Sharing Capabilities: Private 00:22:10.742 Size (in LBAs): 1048576 (4GiB) 00:22:10.742 Capacity (in LBAs): 1048576 (4GiB) 00:22:10.742 Utilization (in LBAs): 1048576 (4GiB) 00:22:10.742 Thin Provisioning: Not Supported 00:22:10.742 Per-NS Atomic Units: No 00:22:10.742 Maximum Single Source Range Length: 128 00:22:10.742 Maximum Copy Length: 128 00:22:10.742 Maximum Source Range Count: 128 00:22:10.742 NGUID/EUI64 Never Reused: No 00:22:10.742 Namespace Write Protected: No 00:22:10.742 Number of LBA Formats: 8 00:22:10.742 Current LBA Format: LBA Format #04 00:22:10.742 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:10.742 LBA Format #01: Data Size: 512 Metadata Size: 8 00:22:10.742 LBA Format #02: Data Size: 512 Metadata Size: 16 00:22:10.742 LBA Format #03: Data Size: 512 Metadata Size: 64 00:22:10.742 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:22:10.742 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:22:10.742 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:22:10.742 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:22:10.742 00:22:10.742 NVM Specific Namespace Data 00:22:10.742 =========================== 00:22:10.742 Logical Block Storage Tag Mask: 0 00:22:10.742 Protection Information Capabilities: 00:22:10.742 16b Guard Protection Information Storage Tag Support: No 00:22:10.742 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:22:10.742 Storage Tag Check Read Support: No 00:22:10.742 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:10.742 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:10.742 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:10.742 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:10.742 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:10.742 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:10.742 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:10.742 Extended LBA Format #07: Storage Tag Size: 0 , 
Protection Information Format: 16b Guard PI 00:22:10.742 18:49:39 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:22:10.742 18:49:39 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:22:11.001 ===================================================== 00:22:11.001 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:22:11.001 ===================================================== 00:22:11.001 Controller Capabilities/Features 00:22:11.001 ================================ 00:22:11.001 Vendor ID: 1b36 00:22:11.001 Subsystem Vendor ID: 1af4 00:22:11.001 Serial Number: 12340 00:22:11.001 Model Number: QEMU NVMe Ctrl 00:22:11.001 Firmware Version: 8.0.0 00:22:11.001 Recommended Arb Burst: 6 00:22:11.001 IEEE OUI Identifier: 00 54 52 00:22:11.001 Multi-path I/O 00:22:11.001 May have multiple subsystem ports: No 00:22:11.001 May have multiple controllers: No 00:22:11.001 Associated with SR-IOV VF: No 00:22:11.001 Max Data Transfer Size: 524288 00:22:11.001 Max Number of Namespaces: 256 00:22:11.001 Max Number of I/O Queues: 64 00:22:11.001 NVMe Specification Version (VS): 1.4 00:22:11.001 NVMe Specification Version (Identify): 1.4 00:22:11.001 Maximum Queue Entries: 2048 00:22:11.001 Contiguous Queues Required: Yes 00:22:11.001 Arbitration Mechanisms Supported 00:22:11.001 Weighted Round Robin: Not Supported 00:22:11.001 Vendor Specific: Not Supported 00:22:11.001 Reset Timeout: 7500 ms 00:22:11.001 Doorbell Stride: 4 bytes 00:22:11.001 NVM Subsystem Reset: Not Supported 00:22:11.001 Command Sets Supported 00:22:11.001 NVM Command Set: Supported 00:22:11.001 Boot Partition: Not Supported 00:22:11.001 Memory Page Size Minimum: 4096 bytes 00:22:11.001 Memory Page Size Maximum: 65536 bytes 00:22:11.001 Persistent Memory Region: Not Supported 00:22:11.001 Optional Asynchronous Events Supported 00:22:11.001 Namespace Attribute Notices: Supported 00:22:11.001 Firmware Activation Notices: Not Supported 00:22:11.001 ANA Change Notices: Not Supported 00:22:11.001 PLE Aggregate Log Change Notices: Not Supported 00:22:11.001 LBA Status Info Alert Notices: Not Supported 00:22:11.001 EGE Aggregate Log Change Notices: Not Supported 00:22:11.001 Normal NVM Subsystem Shutdown event: Not Supported 00:22:11.001 Zone Descriptor Change Notices: Not Supported 00:22:11.001 Discovery Log Change Notices: Not Supported 00:22:11.001 Controller Attributes 00:22:11.001 128-bit Host Identifier: Not Supported 00:22:11.001 Non-Operational Permissive Mode: Not Supported 00:22:11.001 NVM Sets: Not Supported 00:22:11.001 Read Recovery Levels: Not Supported 00:22:11.001 Endurance Groups: Not Supported 00:22:11.001 Predictable Latency Mode: Not Supported 00:22:11.001 Traffic Based Keep ALive: Not Supported 00:22:11.001 Namespace Granularity: Not Supported 00:22:11.001 SQ Associations: Not Supported 00:22:11.001 UUID List: Not Supported 00:22:11.001 Multi-Domain Subsystem: Not Supported 00:22:11.001 Fixed Capacity Management: Not Supported 00:22:11.001 Variable Capacity Management: Not Supported 00:22:11.001 Delete Endurance Group: Not Supported 00:22:11.001 Delete NVM Set: Not Supported 00:22:11.001 Extended LBA Formats Supported: Supported 00:22:11.001 Flexible Data Placement Supported: Not Supported 00:22:11.001 00:22:11.001 Controller Memory Buffer Support 00:22:11.001 ================================ 00:22:11.001 Supported: No 00:22:11.001 00:22:11.001 Persistent Memory Region Support 00:22:11.001 
================================ 00:22:11.001 Supported: No 00:22:11.001 00:22:11.001 Admin Command Set Attributes 00:22:11.001 ============================ 00:22:11.001 Security Send/Receive: Not Supported 00:22:11.001 Format NVM: Supported 00:22:11.001 Firmware Activate/Download: Not Supported 00:22:11.001 Namespace Management: Supported 00:22:11.001 Device Self-Test: Not Supported 00:22:11.001 Directives: Supported 00:22:11.001 NVMe-MI: Not Supported 00:22:11.001 Virtualization Management: Not Supported 00:22:11.001 Doorbell Buffer Config: Supported 00:22:11.001 Get LBA Status Capability: Not Supported 00:22:11.001 Command & Feature Lockdown Capability: Not Supported 00:22:11.001 Abort Command Limit: 4 00:22:11.001 Async Event Request Limit: 4 00:22:11.001 Number of Firmware Slots: N/A 00:22:11.001 Firmware Slot 1 Read-Only: N/A 00:22:11.001 Firmware Activation Without Reset: N/A 00:22:11.001 Multiple Update Detection Support: N/A 00:22:11.001 Firmware Update Granularity: No Information Provided 00:22:11.001 Per-Namespace SMART Log: Yes 00:22:11.001 Asymmetric Namespace Access Log Page: Not Supported 00:22:11.001 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:22:11.001 Command Effects Log Page: Supported 00:22:11.001 Get Log Page Extended Data: Supported 00:22:11.001 Telemetry Log Pages: Not Supported 00:22:11.001 Persistent Event Log Pages: Not Supported 00:22:11.001 Supported Log Pages Log Page: May Support 00:22:11.001 Commands Supported & Effects Log Page: Not Supported 00:22:11.001 Feature Identifiers & Effects Log Page:May Support 00:22:11.001 NVMe-MI Commands & Effects Log Page: May Support 00:22:11.001 Data Area 4 for Telemetry Log: Not Supported 00:22:11.001 Error Log Page Entries Supported: 1 00:22:11.001 Keep Alive: Not Supported 00:22:11.001 00:22:11.001 NVM Command Set Attributes 00:22:11.001 ========================== 00:22:11.001 Submission Queue Entry Size 00:22:11.001 Max: 64 00:22:11.001 Min: 64 00:22:11.001 Completion Queue Entry Size 00:22:11.001 Max: 16 00:22:11.001 Min: 16 00:22:11.001 Number of Namespaces: 256 00:22:11.001 Compare Command: Supported 00:22:11.001 Write Uncorrectable Command: Not Supported 00:22:11.001 Dataset Management Command: Supported 00:22:11.001 Write Zeroes Command: Supported 00:22:11.001 Set Features Save Field: Supported 00:22:11.001 Reservations: Not Supported 00:22:11.001 Timestamp: Supported 00:22:11.001 Copy: Supported 00:22:11.001 Volatile Write Cache: Present 00:22:11.001 Atomic Write Unit (Normal): 1 00:22:11.001 Atomic Write Unit (PFail): 1 00:22:11.001 Atomic Compare & Write Unit: 1 00:22:11.001 Fused Compare & Write: Not Supported 00:22:11.001 Scatter-Gather List 00:22:11.001 SGL Command Set: Supported 00:22:11.001 SGL Keyed: Not Supported 00:22:11.001 SGL Bit Bucket Descriptor: Not Supported 00:22:11.001 SGL Metadata Pointer: Not Supported 00:22:11.001 Oversized SGL: Not Supported 00:22:11.001 SGL Metadata Address: Not Supported 00:22:11.001 SGL Offset: Not Supported 00:22:11.001 Transport SGL Data Block: Not Supported 00:22:11.001 Replay Protected Memory Block: Not Supported 00:22:11.001 00:22:11.001 Firmware Slot Information 00:22:11.001 ========================= 00:22:11.001 Active slot: 1 00:22:11.001 Slot 1 Firmware Revision: 1.0 00:22:11.001 00:22:11.001 00:22:11.001 Commands Supported and Effects 00:22:11.001 ============================== 00:22:11.001 Admin Commands 00:22:11.001 -------------- 00:22:11.001 Delete I/O Submission Queue (00h): Supported 00:22:11.001 Create I/O Submission Queue (01h): Supported 00:22:11.001 
Get Log Page (02h): Supported 00:22:11.001 Delete I/O Completion Queue (04h): Supported 00:22:11.001 Create I/O Completion Queue (05h): Supported 00:22:11.001 Identify (06h): Supported 00:22:11.001 Abort (08h): Supported 00:22:11.001 Set Features (09h): Supported 00:22:11.001 Get Features (0Ah): Supported 00:22:11.002 Asynchronous Event Request (0Ch): Supported 00:22:11.002 Namespace Attachment (15h): Supported NS-Inventory-Change 00:22:11.002 Directive Send (19h): Supported 00:22:11.002 Directive Receive (1Ah): Supported 00:22:11.002 Virtualization Management (1Ch): Supported 00:22:11.002 Doorbell Buffer Config (7Ch): Supported 00:22:11.002 Format NVM (80h): Supported LBA-Change 00:22:11.002 I/O Commands 00:22:11.002 ------------ 00:22:11.002 Flush (00h): Supported LBA-Change 00:22:11.002 Write (01h): Supported LBA-Change 00:22:11.002 Read (02h): Supported 00:22:11.002 Compare (05h): Supported 00:22:11.002 Write Zeroes (08h): Supported LBA-Change 00:22:11.002 Dataset Management (09h): Supported LBA-Change 00:22:11.002 Unknown (0Ch): Supported 00:22:11.002 Unknown (12h): Supported 00:22:11.002 Copy (19h): Supported LBA-Change 00:22:11.002 Unknown (1Dh): Supported LBA-Change 00:22:11.002 00:22:11.002 Error Log 00:22:11.002 ========= 00:22:11.002 00:22:11.002 Arbitration 00:22:11.002 =========== 00:22:11.002 Arbitration Burst: no limit 00:22:11.002 00:22:11.002 Power Management 00:22:11.002 ================ 00:22:11.002 Number of Power States: 1 00:22:11.002 Current Power State: Power State #0 00:22:11.002 Power State #0: 00:22:11.002 Max Power: 25.00 W 00:22:11.002 Non-Operational State: Operational 00:22:11.002 Entry Latency: 16 microseconds 00:22:11.002 Exit Latency: 4 microseconds 00:22:11.002 Relative Read Throughput: 0 00:22:11.002 Relative Read Latency: 0 00:22:11.002 Relative Write Throughput: 0 00:22:11.002 Relative Write Latency: 0 00:22:11.002 Idle Power: Not Reported 00:22:11.002 Active Power: Not Reported 00:22:11.002 Non-Operational Permissive Mode: Not Supported 00:22:11.002 00:22:11.002 Health Information 00:22:11.002 ================== 00:22:11.002 Critical Warnings: 00:22:11.002 Available Spare Space: OK 00:22:11.002 Temperature: OK 00:22:11.002 Device Reliability: OK 00:22:11.002 Read Only: No 00:22:11.002 Volatile Memory Backup: OK 00:22:11.002 Current Temperature: 323 Kelvin (50 Celsius) 00:22:11.002 Temperature Threshold: 343 Kelvin (70 Celsius) 00:22:11.002 Available Spare: 0% 00:22:11.002 Available Spare Threshold: 0% 00:22:11.002 Life Percentage Used: 0% 00:22:11.002 Data Units Read: 605 00:22:11.002 Data Units Written: 533 00:22:11.002 Host Read Commands: 30527 00:22:11.002 Host Write Commands: 30329 00:22:11.002 Controller Busy Time: 0 minutes 00:22:11.002 Power Cycles: 0 00:22:11.002 Power On Hours: 0 hours 00:22:11.002 Unsafe Shutdowns: 0 00:22:11.002 Unrecoverable Media Errors: 0 00:22:11.002 Lifetime Error Log Entries: 0 00:22:11.002 Warning Temperature Time: 0 minutes 00:22:11.002 Critical Temperature Time: 0 minutes 00:22:11.002 00:22:11.002 Number of Queues 00:22:11.002 ================ 00:22:11.002 Number of I/O Submission Queues: 64 00:22:11.002 Number of I/O Completion Queues: 64 00:22:11.002 00:22:11.002 ZNS Specific Controller Data 00:22:11.002 ============================ 00:22:11.002 Zone Append Size Limit: 0 00:22:11.002 00:22:11.002 00:22:11.002 Active Namespaces 00:22:11.002 ================= 00:22:11.002 Namespace ID:1 00:22:11.002 Error Recovery Timeout: Unlimited 00:22:11.002 Command Set Identifier: NVM (00h) 00:22:11.002 Deallocate: Supported 
00:22:11.002 Deallocated/Unwritten Error: Supported 00:22:11.002 Deallocated Read Value: All 0x00 00:22:11.002 Deallocate in Write Zeroes: Not Supported 00:22:11.002 Deallocated Guard Field: 0xFFFF 00:22:11.002 Flush: Supported 00:22:11.002 Reservation: Not Supported 00:22:11.002 Metadata Transferred as: Separate Metadata Buffer 00:22:11.002 Namespace Sharing Capabilities: Private 00:22:11.002 Size (in LBAs): 1548666 (5GiB) 00:22:11.002 Capacity (in LBAs): 1548666 (5GiB) 00:22:11.002 Utilization (in LBAs): 1548666 (5GiB) 00:22:11.002 Thin Provisioning: Not Supported 00:22:11.002 Per-NS Atomic Units: No 00:22:11.002 Maximum Single Source Range Length: 128 00:22:11.002 Maximum Copy Length: 128 00:22:11.002 Maximum Source Range Count: 128 00:22:11.002 NGUID/EUI64 Never Reused: No 00:22:11.002 Namespace Write Protected: No 00:22:11.002 Number of LBA Formats: 8 00:22:11.002 Current LBA Format: LBA Format #07 00:22:11.002 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:11.002 LBA Format #01: Data Size: 512 Metadata Size: 8 00:22:11.002 LBA Format #02: Data Size: 512 Metadata Size: 16 00:22:11.002 LBA Format #03: Data Size: 512 Metadata Size: 64 00:22:11.002 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:22:11.002 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:22:11.002 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:22:11.002 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:22:11.002 00:22:11.002 NVM Specific Namespace Data 00:22:11.002 =========================== 00:22:11.002 Logical Block Storage Tag Mask: 0 00:22:11.002 Protection Information Capabilities: 00:22:11.002 16b Guard Protection Information Storage Tag Support: No 00:22:11.002 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:22:11.002 Storage Tag Check Read Support: No 00:22:11.002 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:11.002 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:11.002 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:11.002 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:11.002 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:11.002 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:11.002 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:11.002 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:11.002 18:49:39 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:22:11.002 18:49:39 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:22:11.260 ===================================================== 00:22:11.260 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:22:11.260 ===================================================== 00:22:11.260 Controller Capabilities/Features 00:22:11.260 ================================ 00:22:11.260 Vendor ID: 1b36 00:22:11.260 Subsystem Vendor ID: 1af4 00:22:11.260 Serial Number: 12341 00:22:11.260 Model Number: QEMU NVMe Ctrl 00:22:11.260 Firmware Version: 8.0.0 00:22:11.260 Recommended Arb Burst: 6 00:22:11.260 IEEE OUI Identifier: 00 54 52 00:22:11.260 Multi-path I/O 00:22:11.260 May have multiple subsystem ports: No 00:22:11.260 May have multiple 
controllers: No 00:22:11.260 Associated with SR-IOV VF: No 00:22:11.260 Max Data Transfer Size: 524288 00:22:11.260 Max Number of Namespaces: 256 00:22:11.260 Max Number of I/O Queues: 64 00:22:11.260 NVMe Specification Version (VS): 1.4 00:22:11.260 NVMe Specification Version (Identify): 1.4 00:22:11.260 Maximum Queue Entries: 2048 00:22:11.260 Contiguous Queues Required: Yes 00:22:11.260 Arbitration Mechanisms Supported 00:22:11.260 Weighted Round Robin: Not Supported 00:22:11.260 Vendor Specific: Not Supported 00:22:11.260 Reset Timeout: 7500 ms 00:22:11.260 Doorbell Stride: 4 bytes 00:22:11.260 NVM Subsystem Reset: Not Supported 00:22:11.260 Command Sets Supported 00:22:11.260 NVM Command Set: Supported 00:22:11.260 Boot Partition: Not Supported 00:22:11.261 Memory Page Size Minimum: 4096 bytes 00:22:11.261 Memory Page Size Maximum: 65536 bytes 00:22:11.261 Persistent Memory Region: Not Supported 00:22:11.261 Optional Asynchronous Events Supported 00:22:11.261 Namespace Attribute Notices: Supported 00:22:11.261 Firmware Activation Notices: Not Supported 00:22:11.261 ANA Change Notices: Not Supported 00:22:11.261 PLE Aggregate Log Change Notices: Not Supported 00:22:11.261 LBA Status Info Alert Notices: Not Supported 00:22:11.261 EGE Aggregate Log Change Notices: Not Supported 00:22:11.261 Normal NVM Subsystem Shutdown event: Not Supported 00:22:11.261 Zone Descriptor Change Notices: Not Supported 00:22:11.261 Discovery Log Change Notices: Not Supported 00:22:11.261 Controller Attributes 00:22:11.261 128-bit Host Identifier: Not Supported 00:22:11.261 Non-Operational Permissive Mode: Not Supported 00:22:11.261 NVM Sets: Not Supported 00:22:11.261 Read Recovery Levels: Not Supported 00:22:11.261 Endurance Groups: Not Supported 00:22:11.261 Predictable Latency Mode: Not Supported 00:22:11.261 Traffic Based Keep ALive: Not Supported 00:22:11.261 Namespace Granularity: Not Supported 00:22:11.261 SQ Associations: Not Supported 00:22:11.261 UUID List: Not Supported 00:22:11.261 Multi-Domain Subsystem: Not Supported 00:22:11.261 Fixed Capacity Management: Not Supported 00:22:11.261 Variable Capacity Management: Not Supported 00:22:11.261 Delete Endurance Group: Not Supported 00:22:11.261 Delete NVM Set: Not Supported 00:22:11.261 Extended LBA Formats Supported: Supported 00:22:11.261 Flexible Data Placement Supported: Not Supported 00:22:11.261 00:22:11.261 Controller Memory Buffer Support 00:22:11.261 ================================ 00:22:11.261 Supported: No 00:22:11.261 00:22:11.261 Persistent Memory Region Support 00:22:11.261 ================================ 00:22:11.261 Supported: No 00:22:11.261 00:22:11.261 Admin Command Set Attributes 00:22:11.261 ============================ 00:22:11.261 Security Send/Receive: Not Supported 00:22:11.261 Format NVM: Supported 00:22:11.261 Firmware Activate/Download: Not Supported 00:22:11.261 Namespace Management: Supported 00:22:11.261 Device Self-Test: Not Supported 00:22:11.261 Directives: Supported 00:22:11.261 NVMe-MI: Not Supported 00:22:11.261 Virtualization Management: Not Supported 00:22:11.261 Doorbell Buffer Config: Supported 00:22:11.261 Get LBA Status Capability: Not Supported 00:22:11.261 Command & Feature Lockdown Capability: Not Supported 00:22:11.261 Abort Command Limit: 4 00:22:11.261 Async Event Request Limit: 4 00:22:11.261 Number of Firmware Slots: N/A 00:22:11.261 Firmware Slot 1 Read-Only: N/A 00:22:11.261 Firmware Activation Without Reset: N/A 00:22:11.261 Multiple Update Detection Support: N/A 00:22:11.261 Firmware Update 
Granularity: No Information Provided 00:22:11.261 Per-Namespace SMART Log: Yes 00:22:11.261 Asymmetric Namespace Access Log Page: Not Supported 00:22:11.261 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:22:11.261 Command Effects Log Page: Supported 00:22:11.261 Get Log Page Extended Data: Supported 00:22:11.261 Telemetry Log Pages: Not Supported 00:22:11.261 Persistent Event Log Pages: Not Supported 00:22:11.261 Supported Log Pages Log Page: May Support 00:22:11.261 Commands Supported & Effects Log Page: Not Supported 00:22:11.261 Feature Identifiers & Effects Log Page:May Support 00:22:11.261 NVMe-MI Commands & Effects Log Page: May Support 00:22:11.261 Data Area 4 for Telemetry Log: Not Supported 00:22:11.261 Error Log Page Entries Supported: 1 00:22:11.261 Keep Alive: Not Supported 00:22:11.261 00:22:11.261 NVM Command Set Attributes 00:22:11.261 ========================== 00:22:11.261 Submission Queue Entry Size 00:22:11.261 Max: 64 00:22:11.261 Min: 64 00:22:11.261 Completion Queue Entry Size 00:22:11.261 Max: 16 00:22:11.261 Min: 16 00:22:11.261 Number of Namespaces: 256 00:22:11.261 Compare Command: Supported 00:22:11.261 Write Uncorrectable Command: Not Supported 00:22:11.261 Dataset Management Command: Supported 00:22:11.261 Write Zeroes Command: Supported 00:22:11.261 Set Features Save Field: Supported 00:22:11.261 Reservations: Not Supported 00:22:11.261 Timestamp: Supported 00:22:11.261 Copy: Supported 00:22:11.261 Volatile Write Cache: Present 00:22:11.261 Atomic Write Unit (Normal): 1 00:22:11.261 Atomic Write Unit (PFail): 1 00:22:11.261 Atomic Compare & Write Unit: 1 00:22:11.261 Fused Compare & Write: Not Supported 00:22:11.261 Scatter-Gather List 00:22:11.261 SGL Command Set: Supported 00:22:11.261 SGL Keyed: Not Supported 00:22:11.261 SGL Bit Bucket Descriptor: Not Supported 00:22:11.261 SGL Metadata Pointer: Not Supported 00:22:11.261 Oversized SGL: Not Supported 00:22:11.261 SGL Metadata Address: Not Supported 00:22:11.261 SGL Offset: Not Supported 00:22:11.261 Transport SGL Data Block: Not Supported 00:22:11.261 Replay Protected Memory Block: Not Supported 00:22:11.261 00:22:11.261 Firmware Slot Information 00:22:11.261 ========================= 00:22:11.261 Active slot: 1 00:22:11.261 Slot 1 Firmware Revision: 1.0 00:22:11.261 00:22:11.261 00:22:11.261 Commands Supported and Effects 00:22:11.261 ============================== 00:22:11.261 Admin Commands 00:22:11.261 -------------- 00:22:11.261 Delete I/O Submission Queue (00h): Supported 00:22:11.261 Create I/O Submission Queue (01h): Supported 00:22:11.261 Get Log Page (02h): Supported 00:22:11.261 Delete I/O Completion Queue (04h): Supported 00:22:11.261 Create I/O Completion Queue (05h): Supported 00:22:11.261 Identify (06h): Supported 00:22:11.261 Abort (08h): Supported 00:22:11.261 Set Features (09h): Supported 00:22:11.261 Get Features (0Ah): Supported 00:22:11.261 Asynchronous Event Request (0Ch): Supported 00:22:11.261 Namespace Attachment (15h): Supported NS-Inventory-Change 00:22:11.261 Directive Send (19h): Supported 00:22:11.261 Directive Receive (1Ah): Supported 00:22:11.261 Virtualization Management (1Ch): Supported 00:22:11.261 Doorbell Buffer Config (7Ch): Supported 00:22:11.261 Format NVM (80h): Supported LBA-Change 00:22:11.261 I/O Commands 00:22:11.261 ------------ 00:22:11.261 Flush (00h): Supported LBA-Change 00:22:11.261 Write (01h): Supported LBA-Change 00:22:11.261 Read (02h): Supported 00:22:11.261 Compare (05h): Supported 00:22:11.261 Write Zeroes (08h): Supported LBA-Change 00:22:11.261 
Dataset Management (09h): Supported LBA-Change 00:22:11.261 Unknown (0Ch): Supported 00:22:11.261 Unknown (12h): Supported 00:22:11.261 Copy (19h): Supported LBA-Change 00:22:11.261 Unknown (1Dh): Supported LBA-Change 00:22:11.261 00:22:11.261 Error Log 00:22:11.261 ========= 00:22:11.261 00:22:11.261 Arbitration 00:22:11.261 =========== 00:22:11.261 Arbitration Burst: no limit 00:22:11.261 00:22:11.261 Power Management 00:22:11.261 ================ 00:22:11.261 Number of Power States: 1 00:22:11.261 Current Power State: Power State #0 00:22:11.261 Power State #0: 00:22:11.261 Max Power: 25.00 W 00:22:11.261 Non-Operational State: Operational 00:22:11.261 Entry Latency: 16 microseconds 00:22:11.261 Exit Latency: 4 microseconds 00:22:11.261 Relative Read Throughput: 0 00:22:11.261 Relative Read Latency: 0 00:22:11.261 Relative Write Throughput: 0 00:22:11.261 Relative Write Latency: 0 00:22:11.520 Idle Power: Not Reported 00:22:11.520 Active Power: Not Reported 00:22:11.520 Non-Operational Permissive Mode: Not Supported 00:22:11.520 00:22:11.520 Health Information 00:22:11.520 ================== 00:22:11.520 Critical Warnings: 00:22:11.520 Available Spare Space: OK 00:22:11.520 Temperature: OK 00:22:11.520 Device Reliability: OK 00:22:11.520 Read Only: No 00:22:11.520 Volatile Memory Backup: OK 00:22:11.520 Current Temperature: 323 Kelvin (50 Celsius) 00:22:11.520 Temperature Threshold: 343 Kelvin (70 Celsius) 00:22:11.520 Available Spare: 0% 00:22:11.520 Available Spare Threshold: 0% 00:22:11.520 Life Percentage Used: 0% 00:22:11.520 Data Units Read: 903 00:22:11.520 Data Units Written: 763 00:22:11.520 Host Read Commands: 45418 00:22:11.520 Host Write Commands: 44129 00:22:11.520 Controller Busy Time: 0 minutes 00:22:11.520 Power Cycles: 0 00:22:11.520 Power On Hours: 0 hours 00:22:11.520 Unsafe Shutdowns: 0 00:22:11.520 Unrecoverable Media Errors: 0 00:22:11.520 Lifetime Error Log Entries: 0 00:22:11.520 Warning Temperature Time: 0 minutes 00:22:11.520 Critical Temperature Time: 0 minutes 00:22:11.520 00:22:11.520 Number of Queues 00:22:11.520 ================ 00:22:11.520 Number of I/O Submission Queues: 64 00:22:11.520 Number of I/O Completion Queues: 64 00:22:11.520 00:22:11.520 ZNS Specific Controller Data 00:22:11.520 ============================ 00:22:11.520 Zone Append Size Limit: 0 00:22:11.520 00:22:11.520 00:22:11.520 Active Namespaces 00:22:11.520 ================= 00:22:11.520 Namespace ID:1 00:22:11.520 Error Recovery Timeout: Unlimited 00:22:11.520 Command Set Identifier: NVM (00h) 00:22:11.520 Deallocate: Supported 00:22:11.520 Deallocated/Unwritten Error: Supported 00:22:11.520 Deallocated Read Value: All 0x00 00:22:11.520 Deallocate in Write Zeroes: Not Supported 00:22:11.520 Deallocated Guard Field: 0xFFFF 00:22:11.520 Flush: Supported 00:22:11.520 Reservation: Not Supported 00:22:11.520 Namespace Sharing Capabilities: Private 00:22:11.520 Size (in LBAs): 1310720 (5GiB) 00:22:11.520 Capacity (in LBAs): 1310720 (5GiB) 00:22:11.520 Utilization (in LBAs): 1310720 (5GiB) 00:22:11.520 Thin Provisioning: Not Supported 00:22:11.520 Per-NS Atomic Units: No 00:22:11.520 Maximum Single Source Range Length: 128 00:22:11.520 Maximum Copy Length: 128 00:22:11.520 Maximum Source Range Count: 128 00:22:11.520 NGUID/EUI64 Never Reused: No 00:22:11.520 Namespace Write Protected: No 00:22:11.520 Number of LBA Formats: 8 00:22:11.520 Current LBA Format: LBA Format #04 00:22:11.520 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:11.520 LBA Format #01: Data Size: 512 Metadata Size: 8 
00:22:11.520 LBA Format #02: Data Size: 512 Metadata Size: 16 00:22:11.520 LBA Format #03: Data Size: 512 Metadata Size: 64 00:22:11.520 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:22:11.520 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:22:11.520 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:22:11.520 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:22:11.520 00:22:11.520 NVM Specific Namespace Data 00:22:11.520 =========================== 00:22:11.520 Logical Block Storage Tag Mask: 0 00:22:11.520 Protection Information Capabilities: 00:22:11.520 16b Guard Protection Information Storage Tag Support: No 00:22:11.520 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:22:11.520 Storage Tag Check Read Support: No 00:22:11.520 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:11.520 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:11.520 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:11.520 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:11.520 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:11.520 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:11.520 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:11.520 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:11.520 18:49:40 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:22:11.520 18:49:40 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:22:11.780 ===================================================== 00:22:11.780 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:22:11.780 ===================================================== 00:22:11.780 Controller Capabilities/Features 00:22:11.780 ================================ 00:22:11.780 Vendor ID: 1b36 00:22:11.780 Subsystem Vendor ID: 1af4 00:22:11.780 Serial Number: 12342 00:22:11.780 Model Number: QEMU NVMe Ctrl 00:22:11.780 Firmware Version: 8.0.0 00:22:11.780 Recommended Arb Burst: 6 00:22:11.780 IEEE OUI Identifier: 00 54 52 00:22:11.780 Multi-path I/O 00:22:11.780 May have multiple subsystem ports: No 00:22:11.780 May have multiple controllers: No 00:22:11.780 Associated with SR-IOV VF: No 00:22:11.780 Max Data Transfer Size: 524288 00:22:11.780 Max Number of Namespaces: 256 00:22:11.780 Max Number of I/O Queues: 64 00:22:11.780 NVMe Specification Version (VS): 1.4 00:22:11.780 NVMe Specification Version (Identify): 1.4 00:22:11.780 Maximum Queue Entries: 2048 00:22:11.780 Contiguous Queues Required: Yes 00:22:11.780 Arbitration Mechanisms Supported 00:22:11.780 Weighted Round Robin: Not Supported 00:22:11.780 Vendor Specific: Not Supported 00:22:11.780 Reset Timeout: 7500 ms 00:22:11.780 Doorbell Stride: 4 bytes 00:22:11.780 NVM Subsystem Reset: Not Supported 00:22:11.780 Command Sets Supported 00:22:11.780 NVM Command Set: Supported 00:22:11.780 Boot Partition: Not Supported 00:22:11.780 Memory Page Size Minimum: 4096 bytes 00:22:11.780 Memory Page Size Maximum: 65536 bytes 00:22:11.780 Persistent Memory Region: Not Supported 00:22:11.780 Optional Asynchronous Events Supported 00:22:11.780 Namespace Attribute Notices: Supported 00:22:11.780 Firmware 
Activation Notices: Not Supported 00:22:11.780 ANA Change Notices: Not Supported 00:22:11.780 PLE Aggregate Log Change Notices: Not Supported 00:22:11.780 LBA Status Info Alert Notices: Not Supported 00:22:11.780 EGE Aggregate Log Change Notices: Not Supported 00:22:11.780 Normal NVM Subsystem Shutdown event: Not Supported 00:22:11.780 Zone Descriptor Change Notices: Not Supported 00:22:11.780 Discovery Log Change Notices: Not Supported 00:22:11.780 Controller Attributes 00:22:11.780 128-bit Host Identifier: Not Supported 00:22:11.780 Non-Operational Permissive Mode: Not Supported 00:22:11.780 NVM Sets: Not Supported 00:22:11.780 Read Recovery Levels: Not Supported 00:22:11.780 Endurance Groups: Not Supported 00:22:11.780 Predictable Latency Mode: Not Supported 00:22:11.780 Traffic Based Keep ALive: Not Supported 00:22:11.780 Namespace Granularity: Not Supported 00:22:11.780 SQ Associations: Not Supported 00:22:11.780 UUID List: Not Supported 00:22:11.780 Multi-Domain Subsystem: Not Supported 00:22:11.780 Fixed Capacity Management: Not Supported 00:22:11.780 Variable Capacity Management: Not Supported 00:22:11.780 Delete Endurance Group: Not Supported 00:22:11.780 Delete NVM Set: Not Supported 00:22:11.780 Extended LBA Formats Supported: Supported 00:22:11.780 Flexible Data Placement Supported: Not Supported 00:22:11.780 00:22:11.780 Controller Memory Buffer Support 00:22:11.780 ================================ 00:22:11.780 Supported: No 00:22:11.780 00:22:11.780 Persistent Memory Region Support 00:22:11.780 ================================ 00:22:11.780 Supported: No 00:22:11.780 00:22:11.780 Admin Command Set Attributes 00:22:11.780 ============================ 00:22:11.780 Security Send/Receive: Not Supported 00:22:11.780 Format NVM: Supported 00:22:11.780 Firmware Activate/Download: Not Supported 00:22:11.780 Namespace Management: Supported 00:22:11.780 Device Self-Test: Not Supported 00:22:11.780 Directives: Supported 00:22:11.780 NVMe-MI: Not Supported 00:22:11.780 Virtualization Management: Not Supported 00:22:11.780 Doorbell Buffer Config: Supported 00:22:11.780 Get LBA Status Capability: Not Supported 00:22:11.780 Command & Feature Lockdown Capability: Not Supported 00:22:11.780 Abort Command Limit: 4 00:22:11.780 Async Event Request Limit: 4 00:22:11.780 Number of Firmware Slots: N/A 00:22:11.780 Firmware Slot 1 Read-Only: N/A 00:22:11.780 Firmware Activation Without Reset: N/A 00:22:11.780 Multiple Update Detection Support: N/A 00:22:11.780 Firmware Update Granularity: No Information Provided 00:22:11.780 Per-Namespace SMART Log: Yes 00:22:11.780 Asymmetric Namespace Access Log Page: Not Supported 00:22:11.780 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:22:11.780 Command Effects Log Page: Supported 00:22:11.780 Get Log Page Extended Data: Supported 00:22:11.780 Telemetry Log Pages: Not Supported 00:22:11.780 Persistent Event Log Pages: Not Supported 00:22:11.780 Supported Log Pages Log Page: May Support 00:22:11.780 Commands Supported & Effects Log Page: Not Supported 00:22:11.780 Feature Identifiers & Effects Log Page:May Support 00:22:11.780 NVMe-MI Commands & Effects Log Page: May Support 00:22:11.780 Data Area 4 for Telemetry Log: Not Supported 00:22:11.780 Error Log Page Entries Supported: 1 00:22:11.780 Keep Alive: Not Supported 00:22:11.780 00:22:11.780 NVM Command Set Attributes 00:22:11.780 ========================== 00:22:11.780 Submission Queue Entry Size 00:22:11.780 Max: 64 00:22:11.780 Min: 64 00:22:11.780 Completion Queue Entry Size 00:22:11.780 Max: 16 
00:22:11.780 Min: 16 00:22:11.780 Number of Namespaces: 256 00:22:11.780 Compare Command: Supported 00:22:11.780 Write Uncorrectable Command: Not Supported 00:22:11.780 Dataset Management Command: Supported 00:22:11.780 Write Zeroes Command: Supported 00:22:11.780 Set Features Save Field: Supported 00:22:11.780 Reservations: Not Supported 00:22:11.780 Timestamp: Supported 00:22:11.780 Copy: Supported 00:22:11.780 Volatile Write Cache: Present 00:22:11.780 Atomic Write Unit (Normal): 1 00:22:11.780 Atomic Write Unit (PFail): 1 00:22:11.780 Atomic Compare & Write Unit: 1 00:22:11.780 Fused Compare & Write: Not Supported 00:22:11.780 Scatter-Gather List 00:22:11.780 SGL Command Set: Supported 00:22:11.780 SGL Keyed: Not Supported 00:22:11.780 SGL Bit Bucket Descriptor: Not Supported 00:22:11.780 SGL Metadata Pointer: Not Supported 00:22:11.780 Oversized SGL: Not Supported 00:22:11.780 SGL Metadata Address: Not Supported 00:22:11.780 SGL Offset: Not Supported 00:22:11.780 Transport SGL Data Block: Not Supported 00:22:11.780 Replay Protected Memory Block: Not Supported 00:22:11.780 00:22:11.780 Firmware Slot Information 00:22:11.780 ========================= 00:22:11.780 Active slot: 1 00:22:11.780 Slot 1 Firmware Revision: 1.0 00:22:11.780 00:22:11.780 00:22:11.780 Commands Supported and Effects 00:22:11.780 ============================== 00:22:11.780 Admin Commands 00:22:11.780 -------------- 00:22:11.780 Delete I/O Submission Queue (00h): Supported 00:22:11.780 Create I/O Submission Queue (01h): Supported 00:22:11.780 Get Log Page (02h): Supported 00:22:11.780 Delete I/O Completion Queue (04h): Supported 00:22:11.780 Create I/O Completion Queue (05h): Supported 00:22:11.780 Identify (06h): Supported 00:22:11.780 Abort (08h): Supported 00:22:11.780 Set Features (09h): Supported 00:22:11.780 Get Features (0Ah): Supported 00:22:11.780 Asynchronous Event Request (0Ch): Supported 00:22:11.780 Namespace Attachment (15h): Supported NS-Inventory-Change 00:22:11.780 Directive Send (19h): Supported 00:22:11.780 Directive Receive (1Ah): Supported 00:22:11.780 Virtualization Management (1Ch): Supported 00:22:11.780 Doorbell Buffer Config (7Ch): Supported 00:22:11.780 Format NVM (80h): Supported LBA-Change 00:22:11.780 I/O Commands 00:22:11.780 ------------ 00:22:11.780 Flush (00h): Supported LBA-Change 00:22:11.780 Write (01h): Supported LBA-Change 00:22:11.780 Read (02h): Supported 00:22:11.780 Compare (05h): Supported 00:22:11.780 Write Zeroes (08h): Supported LBA-Change 00:22:11.780 Dataset Management (09h): Supported LBA-Change 00:22:11.780 Unknown (0Ch): Supported 00:22:11.780 Unknown (12h): Supported 00:22:11.780 Copy (19h): Supported LBA-Change 00:22:11.780 Unknown (1Dh): Supported LBA-Change 00:22:11.780 00:22:11.780 Error Log 00:22:11.780 ========= 00:22:11.780 00:22:11.780 Arbitration 00:22:11.780 =========== 00:22:11.780 Arbitration Burst: no limit 00:22:11.780 00:22:11.780 Power Management 00:22:11.780 ================ 00:22:11.780 Number of Power States: 1 00:22:11.780 Current Power State: Power State #0 00:22:11.780 Power State #0: 00:22:11.780 Max Power: 25.00 W 00:22:11.780 Non-Operational State: Operational 00:22:11.781 Entry Latency: 16 microseconds 00:22:11.781 Exit Latency: 4 microseconds 00:22:11.781 Relative Read Throughput: 0 00:22:11.781 Relative Read Latency: 0 00:22:11.781 Relative Write Throughput: 0 00:22:11.781 Relative Write Latency: 0 00:22:11.781 Idle Power: Not Reported 00:22:11.781 Active Power: Not Reported 00:22:11.781 Non-Operational Permissive Mode: Not Supported 
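Each of the dumps in this section comes from one pass of the for bdf in "${bdfs[@]}" loop visible in the log, which invokes spdk_nvme_identify once per device with a PCIe transport ID such as 'trtype:PCIe traddr:0000:00:12.0'. The controller-level fields repeated in every dump (Vendor ID, Serial Number, Model Number, Max Number of Namespaces, and so on) are read from the NVMe Identify Controller data structure. The standalone C program below is a minimal sketch of that flow, not the code this test actually runs: it connects to one controller by traddr (the address is copied from the log and assumes a device bound to a userspace driver) and prints a few of the same fields through SPDK's public API. The 0000:00:12.0 dump resumes after the sketch with its Health Information section.

/*
 * identify_sketch.c: connect to one PCIe NVMe controller and print a few
 * Identify Controller fields, loosely mirroring spdk_nvme_identify.
 * Link against an SPDK build; the exact link line depends on the install.
 */
#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	/* Initialize the SPDK environment (hugepages, PCI access). */
	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";
	if (spdk_env_init(&env_opts) < 0) {
		fprintf(stderr, "spdk_env_init() failed\n");
		return 1;
	}

	/* Same transport string the log passes via spdk_nvme_identify -r. */
	if (spdk_nvme_transport_id_parse(&trid,
					 "trtype:PCIe traddr:0000:00:12.0") != 0) {
		fprintf(stderr, "failed to parse transport ID\n");
		return 1;
	}

	/* Synchronous probe-and-attach to the one matching controller. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "no controller at %s\n", trid.traddr);
		return 1;
	}

	/* Identify Controller data is cached by the driver at attach time. */
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);

	/* sn/mn/fr are space-padded, not NUL-terminated, hence %.*s. */
	printf("Serial Number: %.*s\n", (int)sizeof(cdata->sn), (const char *)cdata->sn);
	printf("Model Number: %.*s\n", (int)sizeof(cdata->mn), (const char *)cdata->mn);
	printf("Firmware Version: %.*s\n", (int)sizeof(cdata->fr), (const char *)cdata->fr);
	printf("Max Number of Namespaces: %u\n", cdata->nn);

	spdk_nvme_detach(ctrlr);
	return 0;
}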
00:22:11.781 00:22:11.781 Health Information 00:22:11.781 ================== 00:22:11.781 Critical Warnings: 00:22:11.781 Available Spare Space: OK 00:22:11.781 Temperature: OK 00:22:11.781 Device Reliability: OK 00:22:11.781 Read Only: No 00:22:11.781 Volatile Memory Backup: OK 00:22:11.781 Current Temperature: 323 Kelvin (50 Celsius) 00:22:11.781 Temperature Threshold: 343 Kelvin (70 Celsius) 00:22:11.781 Available Spare: 0% 00:22:11.781 Available Spare Threshold: 0% 00:22:11.781 Life Percentage Used: 0% 00:22:11.781 Data Units Read: 1935 00:22:11.781 Data Units Written: 1723 00:22:11.781 Host Read Commands: 93231 00:22:11.781 Host Write Commands: 91511 00:22:11.781 Controller Busy Time: 0 minutes 00:22:11.781 Power Cycles: 0 00:22:11.781 Power On Hours: 0 hours 00:22:11.781 Unsafe Shutdowns: 0 00:22:11.781 Unrecoverable Media Errors: 0 00:22:11.781 Lifetime Error Log Entries: 0 00:22:11.781 Warning Temperature Time: 0 minutes 00:22:11.781 Critical Temperature Time: 0 minutes 00:22:11.781 00:22:11.781 Number of Queues 00:22:11.781 ================ 00:22:11.781 Number of I/O Submission Queues: 64 00:22:11.781 Number of I/O Completion Queues: 64 00:22:11.781 00:22:11.781 ZNS Specific Controller Data 00:22:11.781 ============================ 00:22:11.781 Zone Append Size Limit: 0 00:22:11.781 00:22:11.781 00:22:11.781 Active Namespaces 00:22:11.781 ================= 00:22:11.781 Namespace ID:1 00:22:11.781 Error Recovery Timeout: Unlimited 00:22:11.781 Command Set Identifier: NVM (00h) 00:22:11.781 Deallocate: Supported 00:22:11.781 Deallocated/Unwritten Error: Supported 00:22:11.781 Deallocated Read Value: All 0x00 00:22:11.781 Deallocate in Write Zeroes: Not Supported 00:22:11.781 Deallocated Guard Field: 0xFFFF 00:22:11.781 Flush: Supported 00:22:11.781 Reservation: Not Supported 00:22:11.781 Namespace Sharing Capabilities: Private 00:22:11.781 Size (in LBAs): 1048576 (4GiB) 00:22:11.781 Capacity (in LBAs): 1048576 (4GiB) 00:22:11.781 Utilization (in LBAs): 1048576 (4GiB) 00:22:11.781 Thin Provisioning: Not Supported 00:22:11.781 Per-NS Atomic Units: No 00:22:11.781 Maximum Single Source Range Length: 128 00:22:11.781 Maximum Copy Length: 128 00:22:11.781 Maximum Source Range Count: 128 00:22:11.781 NGUID/EUI64 Never Reused: No 00:22:11.781 Namespace Write Protected: No 00:22:11.781 Number of LBA Formats: 8 00:22:11.781 Current LBA Format: LBA Format #04 00:22:11.781 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:11.781 LBA Format #01: Data Size: 512 Metadata Size: 8 00:22:11.781 LBA Format #02: Data Size: 512 Metadata Size: 16 00:22:11.781 LBA Format #03: Data Size: 512 Metadata Size: 64 00:22:11.781 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:22:11.781 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:22:11.781 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:22:11.781 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:22:11.781 00:22:11.781 NVM Specific Namespace Data 00:22:11.781 =========================== 00:22:11.781 Logical Block Storage Tag Mask: 0 00:22:11.781 Protection Information Capabilities: 00:22:11.781 16b Guard Protection Information Storage Tag Support: No 00:22:11.781 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:22:11.781 Storage Tag Check Read Support: No 00:22:11.781 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:11.781 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:11.781 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:22:11.781 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:11.781 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:11.781 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:11.781 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:11.781 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:11.781 Namespace ID:2 00:22:11.781 Error Recovery Timeout: Unlimited 00:22:11.781 Command Set Identifier: NVM (00h) 00:22:11.781 Deallocate: Supported 00:22:11.781 Deallocated/Unwritten Error: Supported 00:22:11.781 Deallocated Read Value: All 0x00 00:22:11.781 Deallocate in Write Zeroes: Not Supported 00:22:11.781 Deallocated Guard Field: 0xFFFF 00:22:11.781 Flush: Supported 00:22:11.781 Reservation: Not Supported 00:22:11.781 Namespace Sharing Capabilities: Private 00:22:11.781 Size (in LBAs): 1048576 (4GiB) 00:22:11.781 Capacity (in LBAs): 1048576 (4GiB) 00:22:11.781 Utilization (in LBAs): 1048576 (4GiB) 00:22:11.781 Thin Provisioning: Not Supported 00:22:11.781 Per-NS Atomic Units: No 00:22:11.781 Maximum Single Source Range Length: 128 00:22:11.781 Maximum Copy Length: 128 00:22:11.781 Maximum Source Range Count: 128 00:22:11.781 NGUID/EUI64 Never Reused: No 00:22:11.781 Namespace Write Protected: No 00:22:11.781 Number of LBA Formats: 8 00:22:11.781 Current LBA Format: LBA Format #04 00:22:11.781 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:11.781 LBA Format #01: Data Size: 512 Metadata Size: 8 00:22:11.781 LBA Format #02: Data Size: 512 Metadata Size: 16 00:22:11.781 LBA Format #03: Data Size: 512 Metadata Size: 64 00:22:11.781 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:22:11.781 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:22:11.781 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:22:11.781 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:22:11.781 00:22:11.781 NVM Specific Namespace Data 00:22:11.781 =========================== 00:22:11.781 Logical Block Storage Tag Mask: 0 00:22:11.781 Protection Information Capabilities: 00:22:11.781 16b Guard Protection Information Storage Tag Support: No 00:22:11.781 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:22:11.781 Storage Tag Check Read Support: No 00:22:11.781 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:11.781 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:11.781 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:11.781 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:11.781 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:11.781 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:11.781 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:11.781 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:11.781 Namespace ID:3 00:22:11.781 Error Recovery Timeout: Unlimited 00:22:11.781 Command Set Identifier: NVM (00h) 00:22:11.781 Deallocate: Supported 00:22:11.781 Deallocated/Unwritten Error: Supported 00:22:11.781 Deallocated Read 
Value: All 0x00 00:22:11.781 Deallocate in Write Zeroes: Not Supported 00:22:11.781 Deallocated Guard Field: 0xFFFF 00:22:11.781 Flush: Supported 00:22:11.781 Reservation: Not Supported 00:22:11.781 Namespace Sharing Capabilities: Private 00:22:11.781 Size (in LBAs): 1048576 (4GiB) 00:22:11.781 Capacity (in LBAs): 1048576 (4GiB) 00:22:11.781 Utilization (in LBAs): 1048576 (4GiB) 00:22:11.781 Thin Provisioning: Not Supported 00:22:11.781 Per-NS Atomic Units: No 00:22:11.781 Maximum Single Source Range Length: 128 00:22:11.781 Maximum Copy Length: 128 00:22:11.781 Maximum Source Range Count: 128 00:22:11.781 NGUID/EUI64 Never Reused: No 00:22:11.781 Namespace Write Protected: No 00:22:11.781 Number of LBA Formats: 8 00:22:11.781 Current LBA Format: LBA Format #04 00:22:11.781 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:11.781 LBA Format #01: Data Size: 512 Metadata Size: 8 00:22:11.781 LBA Format #02: Data Size: 512 Metadata Size: 16 00:22:11.781 LBA Format #03: Data Size: 512 Metadata Size: 64 00:22:11.781 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:22:11.781 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:22:11.781 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:22:11.781 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:22:11.781 00:22:11.781 NVM Specific Namespace Data 00:22:11.781 =========================== 00:22:11.781 Logical Block Storage Tag Mask: 0 00:22:11.781 Protection Information Capabilities: 00:22:11.781 16b Guard Protection Information Storage Tag Support: No 00:22:11.781 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:22:11.781 Storage Tag Check Read Support: No 00:22:11.781 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:11.781 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:11.781 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:11.781 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:11.781 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:11.781 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:11.781 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:11.781 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:11.781 18:49:40 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:22:11.781 18:49:40 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:22:12.349 ===================================================== 00:22:12.349 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:22:12.349 ===================================================== 00:22:12.349 Controller Capabilities/Features 00:22:12.349 ================================ 00:22:12.349 Vendor ID: 1b36 00:22:12.349 Subsystem Vendor ID: 1af4 00:22:12.349 Serial Number: 12343 00:22:12.349 Model Number: QEMU NVMe Ctrl 00:22:12.349 Firmware Version: 8.0.0 00:22:12.349 Recommended Arb Burst: 6 00:22:12.349 IEEE OUI Identifier: 00 54 52 00:22:12.349 Multi-path I/O 00:22:12.349 May have multiple subsystem ports: No 00:22:12.349 May have multiple controllers: Yes 00:22:12.349 Associated with SR-IOV VF: No 00:22:12.349 Max Data Transfer Size: 524288 00:22:12.349 Max Number of Namespaces: 
256 00:22:12.349 Max Number of I/O Queues: 64 00:22:12.349 NVMe Specification Version (VS): 1.4 00:22:12.349 NVMe Specification Version (Identify): 1.4 00:22:12.349 Maximum Queue Entries: 2048 00:22:12.349 Contiguous Queues Required: Yes 00:22:12.349 Arbitration Mechanisms Supported 00:22:12.349 Weighted Round Robin: Not Supported 00:22:12.349 Vendor Specific: Not Supported 00:22:12.349 Reset Timeout: 7500 ms 00:22:12.349 Doorbell Stride: 4 bytes 00:22:12.349 NVM Subsystem Reset: Not Supported 00:22:12.349 Command Sets Supported 00:22:12.349 NVM Command Set: Supported 00:22:12.349 Boot Partition: Not Supported 00:22:12.349 Memory Page Size Minimum: 4096 bytes 00:22:12.349 Memory Page Size Maximum: 65536 bytes 00:22:12.349 Persistent Memory Region: Not Supported 00:22:12.349 Optional Asynchronous Events Supported 00:22:12.349 Namespace Attribute Notices: Supported 00:22:12.349 Firmware Activation Notices: Not Supported 00:22:12.349 ANA Change Notices: Not Supported 00:22:12.349 PLE Aggregate Log Change Notices: Not Supported 00:22:12.349 LBA Status Info Alert Notices: Not Supported 00:22:12.349 EGE Aggregate Log Change Notices: Not Supported 00:22:12.349 Normal NVM Subsystem Shutdown event: Not Supported 00:22:12.349 Zone Descriptor Change Notices: Not Supported 00:22:12.349 Discovery Log Change Notices: Not Supported 00:22:12.349 Controller Attributes 00:22:12.349 128-bit Host Identifier: Not Supported 00:22:12.349 Non-Operational Permissive Mode: Not Supported 00:22:12.349 NVM Sets: Not Supported 00:22:12.349 Read Recovery Levels: Not Supported 00:22:12.349 Endurance Groups: Supported 00:22:12.349 Predictable Latency Mode: Not Supported 00:22:12.349 Traffic Based Keep Alive: Not Supported 00:22:12.349 Namespace Granularity: Not Supported 00:22:12.349 SQ Associations: Not Supported 00:22:12.349 UUID List: Not Supported 00:22:12.349 Multi-Domain Subsystem: Not Supported 00:22:12.349 Fixed Capacity Management: Not Supported 00:22:12.349 Variable Capacity Management: Not Supported 00:22:12.349 Delete Endurance Group: Not Supported 00:22:12.349 Delete NVM Set: Not Supported 00:22:12.350 Extended LBA Formats Supported: Supported 00:22:12.350 Flexible Data Placement Supported: Supported 00:22:12.350 00:22:12.350 Controller Memory Buffer Support 00:22:12.350 ================================ 00:22:12.350 Supported: No 00:22:12.350 00:22:12.350 Persistent Memory Region Support 00:22:12.350 ================================ 00:22:12.350 Supported: No 00:22:12.350 00:22:12.350 Admin Command Set Attributes 00:22:12.350 ============================ 00:22:12.350 Security Send/Receive: Not Supported 00:22:12.350 Format NVM: Supported 00:22:12.350 Firmware Activate/Download: Not Supported 00:22:12.350 Namespace Management: Supported 00:22:12.350 Device Self-Test: Not Supported 00:22:12.350 Directives: Supported 00:22:12.350 NVMe-MI: Not Supported 00:22:12.350 Virtualization Management: Not Supported 00:22:12.350 Doorbell Buffer Config: Supported 00:22:12.350 Get LBA Status Capability: Not Supported 00:22:12.350 Command & Feature Lockdown Capability: Not Supported 00:22:12.350 Abort Command Limit: 4 00:22:12.350 Async Event Request Limit: 4 00:22:12.350 Number of Firmware Slots: N/A 00:22:12.350 Firmware Slot 1 Read-Only: N/A 00:22:12.350 Firmware Activation Without Reset: N/A 00:22:12.350 Multiple Update Detection Support: N/A 00:22:12.350 Firmware Update Granularity: No Information Provided 00:22:12.350 Per-Namespace SMART Log: Yes 00:22:12.350 Asymmetric Namespace Access Log Page: Not Supported 
00:22:12.350 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:22:12.350 Command Effects Log Page: Supported 00:22:12.350 Get Log Page Extended Data: Supported 00:22:12.350 Telemetry Log Pages: Not Supported 00:22:12.350 Persistent Event Log Pages: Not Supported 00:22:12.350 Supported Log Pages Log Page: May Support 00:22:12.350 Commands Supported & Effects Log Page: Not Supported 00:22:12.350 Feature Identifiers & Effects Log Page: May Support 00:22:12.350 NVMe-MI Commands & Effects Log Page: May Support 00:22:12.350 Data Area 4 for Telemetry Log: Not Supported 00:22:12.350 Error Log Page Entries Supported: 1 00:22:12.350 Keep Alive: Not Supported 00:22:12.350 00:22:12.350 NVM Command Set Attributes 00:22:12.350 ========================== 00:22:12.350 Submission Queue Entry Size 00:22:12.350 Max: 64 00:22:12.350 Min: 64 00:22:12.350 Completion Queue Entry Size 00:22:12.350 Max: 16 00:22:12.350 Min: 16 00:22:12.350 Number of Namespaces: 256 00:22:12.350 Compare Command: Supported 00:22:12.350 Write Uncorrectable Command: Not Supported 00:22:12.350 Dataset Management Command: Supported 00:22:12.350 Write Zeroes Command: Supported 00:22:12.350 Set Features Save Field: Supported 00:22:12.350 Reservations: Not Supported 00:22:12.350 Timestamp: Supported 00:22:12.350 Copy: Supported 00:22:12.350 Volatile Write Cache: Present 00:22:12.350 Atomic Write Unit (Normal): 1 00:22:12.350 Atomic Write Unit (PFail): 1 00:22:12.350 Atomic Compare & Write Unit: 1 00:22:12.350 Fused Compare & Write: Not Supported 00:22:12.350 Scatter-Gather List 00:22:12.350 SGL Command Set: Supported 00:22:12.350 SGL Keyed: Not Supported 00:22:12.350 SGL Bit Bucket Descriptor: Not Supported 00:22:12.350 SGL Metadata Pointer: Not Supported 00:22:12.350 Oversized SGL: Not Supported 00:22:12.350 SGL Metadata Address: Not Supported 00:22:12.350 SGL Offset: Not Supported 00:22:12.350 Transport SGL Data Block: Not Supported 00:22:12.350 Replay Protected Memory Block: Not Supported 00:22:12.350 00:22:12.350 Firmware Slot Information 00:22:12.350 ========================= 00:22:12.350 Active slot: 1 00:22:12.350 Slot 1 Firmware Revision: 1.0 00:22:12.350 00:22:12.350 00:22:12.350 Commands Supported and Effects 00:22:12.350 ============================== 00:22:12.350 Admin Commands 00:22:12.350 -------------- 00:22:12.350 Delete I/O Submission Queue (00h): Supported 00:22:12.350 Create I/O Submission Queue (01h): Supported 00:22:12.350 Get Log Page (02h): Supported 00:22:12.350 Delete I/O Completion Queue (04h): Supported 00:22:12.350 Create I/O Completion Queue (05h): Supported 00:22:12.350 Identify (06h): Supported 00:22:12.350 Abort (08h): Supported 00:22:12.350 Set Features (09h): Supported 00:22:12.350 Get Features (0Ah): Supported 00:22:12.350 Asynchronous Event Request (0Ch): Supported 00:22:12.350 Namespace Attachment (15h): Supported NS-Inventory-Change 00:22:12.350 Directive Send (19h): Supported 00:22:12.350 Directive Receive (1Ah): Supported 00:22:12.350 Virtualization Management (1Ch): Supported 00:22:12.350 Doorbell Buffer Config (7Ch): Supported 00:22:12.350 Format NVM (80h): Supported LBA-Change 00:22:12.350 I/O Commands 00:22:12.350 ------------ 00:22:12.350 Flush (00h): Supported LBA-Change 00:22:12.350 Write (01h): Supported LBA-Change 00:22:12.350 Read (02h): Supported 00:22:12.350 Compare (05h): Supported 00:22:12.350 Write Zeroes (08h): Supported LBA-Change 00:22:12.350 Dataset Management (09h): Supported LBA-Change 00:22:12.350 Unknown (0Ch): Supported 00:22:12.350 Unknown (12h): Supported 00:22:12.350 Copy 
(19h): Supported LBA-Change 00:22:12.350 Unknown (1Dh): Supported LBA-Change 00:22:12.350 00:22:12.350 Error Log 00:22:12.350 ========= 00:22:12.350 00:22:12.350 Arbitration 00:22:12.350 =========== 00:22:12.350 Arbitration Burst: no limit 00:22:12.350 00:22:12.350 Power Management 00:22:12.350 ================ 00:22:12.350 Number of Power States: 1 00:22:12.350 Current Power State: Power State #0 00:22:12.350 Power State #0: 00:22:12.350 Max Power: 25.00 W 00:22:12.350 Non-Operational State: Operational 00:22:12.350 Entry Latency: 16 microseconds 00:22:12.350 Exit Latency: 4 microseconds 00:22:12.350 Relative Read Throughput: 0 00:22:12.350 Relative Read Latency: 0 00:22:12.350 Relative Write Throughput: 0 00:22:12.350 Relative Write Latency: 0 00:22:12.350 Idle Power: Not Reported 00:22:12.350 Active Power: Not Reported 00:22:12.350 Non-Operational Permissive Mode: Not Supported 00:22:12.350 00:22:12.350 Health Information 00:22:12.350 ================== 00:22:12.350 Critical Warnings: 00:22:12.350 Available Spare Space: OK 00:22:12.350 Temperature: OK 00:22:12.350 Device Reliability: OK 00:22:12.350 Read Only: No 00:22:12.350 Volatile Memory Backup: OK 00:22:12.350 Current Temperature: 323 Kelvin (50 Celsius) 00:22:12.350 Temperature Threshold: 343 Kelvin (70 Celsius) 00:22:12.350 Available Spare: 0% 00:22:12.350 Available Spare Threshold: 0% 00:22:12.350 Life Percentage Used: 0% 00:22:12.350 Data Units Read: 760 00:22:12.350 Data Units Written: 689 00:22:12.350 Host Read Commands: 32115 00:22:12.350 Host Write Commands: 31538 00:22:12.350 Controller Busy Time: 0 minutes 00:22:12.350 Power Cycles: 0 00:22:12.350 Power On Hours: 0 hours 00:22:12.350 Unsafe Shutdowns: 0 00:22:12.350 Unrecoverable Media Errors: 0 00:22:12.350 Lifetime Error Log Entries: 0 00:22:12.350 Warning Temperature Time: 0 minutes 00:22:12.350 Critical Temperature Time: 0 minutes 00:22:12.350 00:22:12.350 Number of Queues 00:22:12.350 ================ 00:22:12.350 Number of I/O Submission Queues: 64 00:22:12.350 Number of I/O Completion Queues: 64 00:22:12.350 00:22:12.350 ZNS Specific Controller Data 00:22:12.350 ============================ 00:22:12.350 Zone Append Size Limit: 0 00:22:12.350 00:22:12.350 00:22:12.350 Active Namespaces 00:22:12.350 ================= 00:22:12.350 Namespace ID:1 00:22:12.350 Error Recovery Timeout: Unlimited 00:22:12.350 Command Set Identifier: NVM (00h) 00:22:12.350 Deallocate: Supported 00:22:12.350 Deallocated/Unwritten Error: Supported 00:22:12.350 Deallocated Read Value: All 0x00 00:22:12.350 Deallocate in Write Zeroes: Not Supported 00:22:12.350 Deallocated Guard Field: 0xFFFF 00:22:12.350 Flush: Supported 00:22:12.350 Reservation: Not Supported 00:22:12.350 Namespace Sharing Capabilities: Multiple Controllers 00:22:12.350 Size (in LBAs): 262144 (1GiB) 00:22:12.350 Capacity (in LBAs): 262144 (1GiB) 00:22:12.350 Utilization (in LBAs): 262144 (1GiB) 00:22:12.350 Thin Provisioning: Not Supported 00:22:12.350 Per-NS Atomic Units: No 00:22:12.350 Maximum Single Source Range Length: 128 00:22:12.350 Maximum Copy Length: 128 00:22:12.350 Maximum Source Range Count: 128 00:22:12.350 NGUID/EUI64 Never Reused: No 00:22:12.350 Namespace Write Protected: No 00:22:12.350 Endurance group ID: 1 00:22:12.350 Number of LBA Formats: 8 00:22:12.350 Current LBA Format: LBA Format #04 00:22:12.350 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:12.350 LBA Format #01: Data Size: 512 Metadata Size: 8 00:22:12.350 LBA Format #02: Data Size: 512 Metadata Size: 16 00:22:12.350 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:22:12.350 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:22:12.350 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:22:12.350 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:22:12.350 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:22:12.350 00:22:12.350 Get Feature FDP: 00:22:12.350 ================ 00:22:12.350 Enabled: Yes 00:22:12.350 FDP configuration index: 0 00:22:12.350 00:22:12.350 FDP configurations log page 00:22:12.350 =========================== 00:22:12.350 Number of FDP configurations: 1 00:22:12.350 Version: 0 00:22:12.350 Size: 112 00:22:12.350 FDP Configuration Descriptor: 0 00:22:12.350 Descriptor Size: 96 00:22:12.350 Reclaim Group Identifier format: 2 00:22:12.350 FDP Volatile Write Cache: Not Present 00:22:12.351 FDP Configuration: Valid 00:22:12.351 Vendor Specific Size: 0 00:22:12.351 Number of Reclaim Groups: 2 00:22:12.351 Number of Reclaim Unit Handles: 8 00:22:12.351 Max Placement Identifiers: 128 00:22:12.351 Number of Namespaces Supported: 256 00:22:12.351 Reclaim Unit Nominal Size: 6000000 bytes 00:22:12.351 Estimated Reclaim Unit Time Limit: Not Reported 00:22:12.351 RUH Desc #000: RUH Type: Initially Isolated 00:22:12.351 RUH Desc #001: RUH Type: Initially Isolated 00:22:12.351 RUH Desc #002: RUH Type: Initially Isolated 00:22:12.351 RUH Desc #003: RUH Type: Initially Isolated 00:22:12.351 RUH Desc #004: RUH Type: Initially Isolated 00:22:12.351 RUH Desc #005: RUH Type: Initially Isolated 00:22:12.351 RUH Desc #006: RUH Type: Initially Isolated 00:22:12.351 RUH Desc #007: RUH Type: Initially Isolated 00:22:12.351 00:22:12.351 FDP reclaim unit handle usage log page 00:22:12.351 ====================================== 00:22:12.351 Number of Reclaim Unit Handles: 8 00:22:12.351 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:22:12.351 RUH Usage Desc #001: RUH Attributes: Unused 00:22:12.351 RUH Usage Desc #002: RUH Attributes: Unused 00:22:12.351 RUH Usage Desc #003: RUH Attributes: Unused 00:22:12.351 RUH Usage Desc #004: RUH Attributes: Unused 00:22:12.351 RUH Usage Desc #005: RUH Attributes: Unused 00:22:12.351 RUH Usage Desc #006: RUH Attributes: Unused 00:22:12.351 RUH Usage Desc #007: RUH Attributes: Unused 00:22:12.351 00:22:12.351 FDP statistics log page 00:22:12.351 ======================= 00:22:12.351 Host bytes with metadata written: 430940160 00:22:12.351 Media bytes with metadata written: 430985216 00:22:12.351 Media bytes erased: 0 00:22:12.351 00:22:12.351 FDP events log page 00:22:12.351 =================== 00:22:12.351 Number of FDP events: 0 00:22:12.351 00:22:12.351 NVM Specific Namespace Data 00:22:12.351 =========================== 00:22:12.351 Logical Block Storage Tag Mask: 0 00:22:12.351 Protection Information Capabilities: 00:22:12.351 16b Guard Protection Information Storage Tag Support: No 00:22:12.351 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:22:12.351 Storage Tag Check Read Support: No 00:22:12.351 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:12.351 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:12.351 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:12.351 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:12.351 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:12.351 Extended LBA Format #05: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:12.351 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:12.351 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:22:12.351 00:22:12.351 real 0m2.090s 00:22:12.351 user 0m0.719s 00:22:12.351 sys 0m1.112s 00:22:12.351 18:49:40 nvme.nvme_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:12.351 18:49:40 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:22:12.351 ************************************ 00:22:12.351 END TEST nvme_identify 00:22:12.351 ************************************ 00:22:12.351 18:49:40 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:22:12.351 18:49:40 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:12.351 18:49:40 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:12.351 18:49:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:22:12.351 ************************************ 00:22:12.351 START TEST nvme_perf 00:22:12.351 ************************************ 00:22:12.351 18:49:40 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # nvme_perf 00:22:12.351 18:49:40 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:22:13.742 Initializing NVMe Controllers 00:22:13.742 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:22:13.742 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:22:13.742 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:22:13.742 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:22:13.742 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:22:13.742 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:22:13.742 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:22:13.742 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:22:13.742 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:22:13.742 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:22:13.742 Initialization complete. Launching workers. 
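The identify dumps above are ordinary NVMe Identify data read through SPDK's public C API. Below is a minimal sketch of pulling the same headline fields; it is illustrative only, not the harness's actual code, and assumes an SPDK build environment (the probe/attach callbacks and accessors such as spdk_nvme_ctrlr_get_data() come from spdk/nvme.h).

```c
/* Illustrative sketch: print a few of the fields that spdk_nvme_identify
 * dumps above, via SPDK's public API. Error handling trimmed for brevity. */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool
probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	return true;	/* attach to every controller the probe finds */
}

static void
attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
	const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	uint32_t nsid;

	/* mn/fr are fixed-width, space-padded fields, hence the %.Ns formats. */
	printf("%s: VID 0x%04x Model %.40s FW %.8s, max xfer %u bytes\n",
	       trid->traddr, cdata->vid, cdata->mn, cdata->fr,
	       spdk_nvme_ctrlr_get_max_xfer_size(ctrlr));

	for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

		/* 4096 here corresponds to "Current LBA Format: #04" above. */
		printf("  ns %u: sector size %u\n", nsid,
		       spdk_nvme_ns_get_sector_size(ns));
	}
}

int
main(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "identify_sketch";	/* hypothetical app name */
	if (spdk_env_init(&opts) < 0 ||
	    spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0) {
		return 1;
	}
	return 0;
}
```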
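The Health Information blocks (temperature, spare, data units) come from the SMART / Health Information log page rather than Identify; sections such as the FDP configurations log page are fetched the same way with a different log page ID. A sketch of reading it, continuing from attach_cb in the previous snippet and again only illustrative:

```c
/* Illustrative sketch: fetch the SMART / Health Information log page behind
 * the "Current Temperature: 323 Kelvin (50 Celsius)" lines above. Assumes an
 * already-attached ctrlr from the previous sketch. */
#include <stdbool.h>
#include <stdio.h>
#include "spdk/nvme.h"

static volatile bool g_health_done;

static void
health_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
	g_health_done = true;
}

static void
print_health(struct spdk_nvme_ctrlr *ctrlr)
{
	static struct spdk_nvme_health_information_page hp;

	if (spdk_nvme_ctrlr_cmd_get_log_page(ctrlr,
			SPDK_NVME_LOG_HEALTH_INFORMATION,
			SPDK_NVME_GLOBAL_NS_TAG, &hp, sizeof(hp),
			0, health_cb, NULL) != 0) {
		return;
	}
	/* Get Log Page is asynchronous; poll the admin queue until done. */
	while (!g_health_done) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
	/* NVMe reports composite temperature in Kelvin: 323 K - 273 = 50 C. */
	printf("temp %u K (%d C), spare %u%%, used %u%%\n",
	       hp.temperature, hp.temperature - 273,
	       hp.available_spare, hp.percentage_used);
}
```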
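Below, spdk_nvme_perf prints per-device percentile summaries followed by full latency histograms (requested here with the -LL flag). The percentiles are read off the cumulative bucket counts; the following sketch shows that reduction with made-up bucket values, not the run's actual data or SPDK's own implementation:

```c
/* Illustrative sketch: recover a latency percentile from cumulative
 * histogram buckets like the "Range in us / Cumulative IO count" dumps
 * below. All bucket bounds and counts here are hypothetical. */
#include <stdio.h>

struct bucket {
	double upper_us;	/* upper edge of the latency range, in microseconds */
	long cumulative;	/* I/Os completed at or below this latency */
};

static double
percentile(const struct bucket *b, int n, long total_ios, double pct)
{
	long target = (long)(total_ios * pct / 100.0);

	for (int i = 0; i < n; i++) {
		if (b[i].cumulative >= target) {
			return b[i].upper_us;	/* first bucket covering the target rank */
		}
	}
	return b[n - 1].upper_us;
}

int
main(void)
{
	/* Hypothetical buckets, loosely shaped like the 0000:00:10.0 dump. */
	struct bucket hist[] = {
		{ 8488.472, 122 }, { 9861.608, 6080 }, { 11921.310, 10944 },
		{ 14355.505, 11552 }, { 16727.284, 11917 }, { 51929.478, 12160 },
	};
	long total = 12160;

	printf("p50 ~ %.3f us, p99 ~ %.3f us\n",
	       percentile(hist, 6, total, 50.0),
	       percentile(hist, 6, total, 99.0));
	return 0;
}
```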
00:22:13.742 ======================================================== 00:22:13.742 Latency(us) 00:22:13.742 Device Information : IOPS MiB/s Average min max 00:22:13.742 PCIE (0000:00:10.0) NSID 1 from core 0: 12146.47 142.34 10573.46 8258.23 51821.44 00:22:13.742 PCIE (0000:00:11.0) NSID 1 from core 0: 12146.47 142.34 10543.69 8332.26 48074.87 00:22:13.742 PCIE (0000:00:13.0) NSID 1 from core 0: 12146.47 142.34 10511.86 8312.82 45231.44 00:22:13.742 PCIE (0000:00:12.0) NSID 1 from core 0: 12146.47 142.34 10479.66 8331.38 41417.05 00:22:13.742 PCIE (0000:00:12.0) NSID 2 from core 0: 12146.47 142.34 10447.62 8339.25 37630.69 00:22:13.742 PCIE (0000:00:12.0) NSID 3 from core 0: 12146.47 142.34 10414.61 8366.58 33920.34 00:22:13.742 ======================================================== 00:22:13.742 Total : 72878.81 854.05 10495.15 8258.23 51821.44 00:22:13.742 00:22:13.742 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:22:13.742 ================================================================================= 00:22:13.742 1.00000% : 8488.472us 00:22:13.742 10.00000% : 8862.964us 00:22:13.742 25.00000% : 9237.455us 00:22:13.742 50.00000% : 9861.608us 00:22:13.742 75.00000% : 10673.006us 00:22:13.742 90.00000% : 11921.310us 00:22:13.742 95.00000% : 14355.505us 00:22:13.742 98.00000% : 16727.284us 00:22:13.742 99.00000% : 38447.787us 00:22:13.742 99.50000% : 48933.547us 00:22:13.742 99.90000% : 51180.495us 00:22:13.742 99.99000% : 51929.478us 00:22:13.742 99.99900% : 51929.478us 00:22:13.742 99.99990% : 51929.478us 00:22:13.742 99.99999% : 51929.478us 00:22:13.742 00:22:13.742 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:22:13.742 ================================================================================= 00:22:13.742 1.00000% : 8550.888us 00:22:13.742 10.00000% : 8925.379us 00:22:13.742 25.00000% : 9299.870us 00:22:13.742 50.00000% : 9799.192us 00:22:13.742 75.00000% : 10673.006us 00:22:13.742 90.00000% : 11921.310us 00:22:13.742 95.00000% : 14355.505us 00:22:13.742 98.00000% : 17101.775us 00:22:13.742 99.00000% : 35951.177us 00:22:13.742 99.50000% : 45438.293us 00:22:13.742 99.90000% : 47685.242us 00:22:13.742 99.99000% : 48184.564us 00:22:13.742 99.99900% : 48184.564us 00:22:13.742 99.99990% : 48184.564us 00:22:13.742 99.99999% : 48184.564us 00:22:13.742 00:22:13.742 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:22:13.742 ================================================================================= 00:22:13.742 1.00000% : 8550.888us 00:22:13.742 10.00000% : 8987.794us 00:22:13.742 25.00000% : 9299.870us 00:22:13.742 50.00000% : 9799.192us 00:22:13.742 75.00000% : 10673.006us 00:22:13.742 90.00000% : 11921.310us 00:22:13.742 95.00000% : 14230.674us 00:22:13.742 98.00000% : 17351.436us 00:22:13.742 99.00000% : 32955.246us 00:22:13.742 99.50000% : 42442.362us 00:22:13.742 99.90000% : 44689.310us 00:22:13.742 99.99000% : 45188.632us 00:22:13.742 99.99900% : 45438.293us 00:22:13.742 99.99990% : 45438.293us 00:22:13.742 99.99999% : 45438.293us 00:22:13.742 00:22:13.742 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:22:13.742 ================================================================================= 00:22:13.742 1.00000% : 8550.888us 00:22:13.742 10.00000% : 8987.794us 00:22:13.742 25.00000% : 9299.870us 00:22:13.742 50.00000% : 9799.192us 00:22:13.742 75.00000% : 10673.006us 00:22:13.742 90.00000% : 11983.726us 00:22:13.742 95.00000% : 14043.429us 00:22:13.742 98.00000% : 17601.097us 
00:22:13.742 99.00000% : 29085.501us 00:22:13.742 99.50000% : 38697.448us 00:22:13.742 99.90000% : 40944.396us 00:22:13.742 99.99000% : 41443.718us 00:22:13.742 99.99900% : 41443.718us 00:22:13.742 99.99990% : 41443.718us 00:22:13.742 99.99999% : 41443.718us 00:22:13.742 00:22:13.742 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:22:13.742 ================================================================================= 00:22:13.742 1.00000% : 8550.888us 00:22:13.742 10.00000% : 8925.379us 00:22:13.742 25.00000% : 9299.870us 00:22:13.742 50.00000% : 9799.192us 00:22:13.742 75.00000% : 10673.006us 00:22:13.742 90.00000% : 12046.141us 00:22:13.742 95.00000% : 14105.844us 00:22:13.742 98.00000% : 18100.419us 00:22:13.742 99.00000% : 25465.417us 00:22:13.742 99.50000% : 34952.533us 00:22:13.742 99.90000% : 37199.482us 00:22:13.742 99.99000% : 37698.804us 00:22:13.742 99.99900% : 37698.804us 00:22:13.742 99.99990% : 37698.804us 00:22:13.742 99.99999% : 37698.804us 00:22:13.742 00:22:13.742 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:22:13.742 ================================================================================= 00:22:13.742 1.00000% : 8550.888us 00:22:13.742 10.00000% : 8925.379us 00:22:13.742 25.00000% : 9299.870us 00:22:13.742 50.00000% : 9799.192us 00:22:13.742 75.00000% : 10673.006us 00:22:13.742 90.00000% : 11921.310us 00:22:13.742 95.00000% : 14230.674us 00:22:13.742 98.00000% : 18350.080us 00:22:13.742 99.00000% : 21720.503us 00:22:13.742 99.50000% : 31207.619us 00:22:13.742 99.90000% : 33454.568us 00:22:13.742 99.99000% : 33953.890us 00:22:13.742 99.99900% : 33953.890us 00:22:13.742 99.99990% : 33953.890us 00:22:13.742 99.99999% : 33953.890us 00:22:13.742 00:22:13.742 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:22:13.742 ============================================================================== 00:22:13.742 Range in us Cumulative IO count 00:22:13.742 8238.811 - 8301.227: 0.0411% ( 5) 00:22:13.742 8301.227 - 8363.642: 0.2961% ( 31) 00:22:13.742 8363.642 - 8426.057: 0.7072% ( 50) 00:22:13.742 8426.057 - 8488.472: 1.2007% ( 60) 00:22:13.742 8488.472 - 8550.888: 2.2286% ( 125) 00:22:13.742 8550.888 - 8613.303: 3.3964% ( 142) 00:22:13.742 8613.303 - 8675.718: 4.6382% ( 151) 00:22:13.742 8675.718 - 8738.133: 6.0691% ( 174) 00:22:13.742 8738.133 - 8800.549: 7.8783% ( 220) 00:22:13.742 8800.549 - 8862.964: 10.0411% ( 263) 00:22:13.742 8862.964 - 8925.379: 12.3520% ( 281) 00:22:13.742 8925.379 - 8987.794: 14.7204% ( 288) 00:22:13.742 8987.794 - 9050.210: 17.1875% ( 300) 00:22:13.742 9050.210 - 9112.625: 19.7615% ( 313) 00:22:13.742 9112.625 - 9175.040: 22.5493% ( 339) 00:22:13.742 9175.040 - 9237.455: 25.3372% ( 339) 00:22:13.742 9237.455 - 9299.870: 28.1086% ( 337) 00:22:13.742 9299.870 - 9362.286: 31.0855% ( 362) 00:22:13.742 9362.286 - 9424.701: 33.9556% ( 349) 00:22:13.742 9424.701 - 9487.116: 36.9984% ( 370) 00:22:13.742 9487.116 - 9549.531: 39.7615% ( 336) 00:22:13.742 9549.531 - 9611.947: 42.3191% ( 311) 00:22:13.742 9611.947 - 9674.362: 45.0658% ( 334) 00:22:13.742 9674.362 - 9736.777: 47.3766% ( 281) 00:22:13.742 9736.777 - 9799.192: 49.7944% ( 294) 00:22:13.742 9799.192 - 9861.608: 52.0888% ( 279) 00:22:13.742 9861.608 - 9924.023: 54.6546% ( 312) 00:22:13.742 9924.023 - 9986.438: 57.0312% ( 289) 00:22:13.742 9986.438 - 10048.853: 59.2105% ( 265) 00:22:13.742 10048.853 - 10111.269: 61.2336% ( 246) 00:22:13.742 10111.269 - 10173.684: 63.2977% ( 251) 00:22:13.742 10173.684 - 10236.099: 65.2138% ( 233) 
00:22:13.742 10236.099 - 10298.514: 67.0312% ( 221) 00:22:13.742 10298.514 - 10360.930: 68.6924% ( 202) 00:22:13.742 10360.930 - 10423.345: 70.3618% ( 203) 00:22:13.742 10423.345 - 10485.760: 71.8010% ( 175) 00:22:13.742 10485.760 - 10548.175: 73.0263% ( 149) 00:22:13.742 10548.175 - 10610.590: 74.3832% ( 165) 00:22:13.742 10610.590 - 10673.006: 75.4770% ( 133) 00:22:13.742 10673.006 - 10735.421: 76.5132% ( 126) 00:22:13.742 10735.421 - 10797.836: 77.6562% ( 139) 00:22:13.742 10797.836 - 10860.251: 78.5444% ( 108) 00:22:13.742 10860.251 - 10922.667: 79.5724% ( 125) 00:22:13.742 10922.667 - 10985.082: 80.3947% ( 100) 00:22:13.742 10985.082 - 11047.497: 81.2993% ( 110) 00:22:13.742 11047.497 - 11109.912: 82.0970% ( 97) 00:22:13.742 11109.912 - 11172.328: 82.8947% ( 97) 00:22:13.742 11172.328 - 11234.743: 83.7089% ( 99) 00:22:13.742 11234.743 - 11297.158: 84.6135% ( 110) 00:22:13.742 11297.158 - 11359.573: 85.3043% ( 84) 00:22:13.742 11359.573 - 11421.989: 86.1595% ( 104) 00:22:13.742 11421.989 - 11484.404: 86.8257% ( 81) 00:22:13.742 11484.404 - 11546.819: 87.5822% ( 92) 00:22:13.742 11546.819 - 11609.234: 88.2319% ( 79) 00:22:13.742 11609.234 - 11671.650: 88.8405% ( 74) 00:22:13.742 11671.650 - 11734.065: 89.1941% ( 43) 00:22:13.743 11734.065 - 11796.480: 89.5806% ( 47) 00:22:13.743 11796.480 - 11858.895: 89.8191% ( 29) 00:22:13.743 11858.895 - 11921.310: 90.0411% ( 27) 00:22:13.743 11921.310 - 11983.726: 90.2467% ( 25) 00:22:13.743 11983.726 - 12046.141: 90.4441% ( 24) 00:22:13.743 12046.141 - 12108.556: 90.6250% ( 22) 00:22:13.743 12108.556 - 12170.971: 90.8059% ( 22) 00:22:13.743 12170.971 - 12233.387: 90.9457% ( 17) 00:22:13.743 12233.387 - 12295.802: 91.1431% ( 24) 00:22:13.743 12295.802 - 12358.217: 91.2500% ( 13) 00:22:13.743 12358.217 - 12420.632: 91.3980% ( 18) 00:22:13.743 12420.632 - 12483.048: 91.4967% ( 12) 00:22:13.743 12483.048 - 12545.463: 91.6118% ( 14) 00:22:13.743 12545.463 - 12607.878: 91.7023% ( 11) 00:22:13.743 12607.878 - 12670.293: 91.8174% ( 14) 00:22:13.743 12670.293 - 12732.709: 91.9408% ( 15) 00:22:13.743 12732.709 - 12795.124: 92.0148% ( 9) 00:22:13.743 12795.124 - 12857.539: 92.1135% ( 12) 00:22:13.743 12857.539 - 12919.954: 92.1875% ( 9) 00:22:13.743 12919.954 - 12982.370: 92.2780% ( 11) 00:22:13.743 12982.370 - 13044.785: 92.3602% ( 10) 00:22:13.743 13044.785 - 13107.200: 92.4671% ( 13) 00:22:13.743 13107.200 - 13169.615: 92.5905% ( 15) 00:22:13.743 13169.615 - 13232.030: 92.6809% ( 11) 00:22:13.743 13232.030 - 13294.446: 92.8207% ( 17) 00:22:13.743 13294.446 - 13356.861: 92.9030% ( 10) 00:22:13.743 13356.861 - 13419.276: 93.0345% ( 16) 00:22:13.743 13419.276 - 13481.691: 93.1414% ( 13) 00:22:13.743 13481.691 - 13544.107: 93.2484% ( 13) 00:22:13.743 13544.107 - 13606.522: 93.3717% ( 15) 00:22:13.743 13606.522 - 13668.937: 93.5115% ( 17) 00:22:13.743 13668.937 - 13731.352: 93.6431% ( 16) 00:22:13.743 13731.352 - 13793.768: 93.7829% ( 17) 00:22:13.743 13793.768 - 13856.183: 93.9062% ( 15) 00:22:13.743 13856.183 - 13918.598: 94.0461% ( 17) 00:22:13.743 13918.598 - 13981.013: 94.1776% ( 16) 00:22:13.743 13981.013 - 14043.429: 94.3257% ( 18) 00:22:13.743 14043.429 - 14105.844: 94.4984% ( 21) 00:22:13.743 14105.844 - 14168.259: 94.6299% ( 16) 00:22:13.743 14168.259 - 14230.674: 94.7944% ( 20) 00:22:13.743 14230.674 - 14293.090: 94.9671% ( 21) 00:22:13.743 14293.090 - 14355.505: 95.1316% ( 20) 00:22:13.743 14355.505 - 14417.920: 95.2878% ( 19) 00:22:13.743 14417.920 - 14480.335: 95.4359% ( 18) 00:22:13.743 14480.335 - 14542.750: 95.5921% ( 19) 00:22:13.743 14542.750 
- 14605.166: 95.7237% ( 16) 00:22:13.743 14605.166 - 14667.581: 95.8553% ( 16) 00:22:13.743 14667.581 - 14729.996: 95.9951% ( 17) 00:22:13.743 14729.996 - 14792.411: 96.1102% ( 14) 00:22:13.743 14792.411 - 14854.827: 96.1842% ( 9) 00:22:13.743 14854.827 - 14917.242: 96.3322% ( 18) 00:22:13.743 14917.242 - 14979.657: 96.3980% ( 8) 00:22:13.743 14979.657 - 15042.072: 96.4720% ( 9) 00:22:13.743 15042.072 - 15104.488: 96.5625% ( 11) 00:22:13.743 15104.488 - 15166.903: 96.6612% ( 12) 00:22:13.743 15166.903 - 15229.318: 96.7352% ( 9) 00:22:13.743 15229.318 - 15291.733: 96.8092% ( 9) 00:22:13.743 15291.733 - 15354.149: 96.8832% ( 9) 00:22:13.743 15354.149 - 15416.564: 96.9737% ( 11) 00:22:13.743 15416.564 - 15478.979: 97.0477% ( 9) 00:22:13.743 15478.979 - 15541.394: 97.1382% ( 11) 00:22:13.743 15541.394 - 15603.810: 97.2204% ( 10) 00:22:13.743 15603.810 - 15666.225: 97.3026% ( 10) 00:22:13.743 15666.225 - 15728.640: 97.3766% ( 9) 00:22:13.743 15728.640 - 15791.055: 97.4507% ( 9) 00:22:13.743 15791.055 - 15853.470: 97.5329% ( 10) 00:22:13.743 15853.470 - 15915.886: 97.6234% ( 11) 00:22:13.743 15915.886 - 15978.301: 97.6727% ( 6) 00:22:13.743 15978.301 - 16103.131: 97.7714% ( 12) 00:22:13.743 16103.131 - 16227.962: 97.8125% ( 5) 00:22:13.743 16227.962 - 16352.792: 97.8454% ( 4) 00:22:13.743 16352.792 - 16477.623: 97.9030% ( 7) 00:22:13.743 16477.623 - 16602.453: 97.9770% ( 9) 00:22:13.743 16602.453 - 16727.284: 98.0181% ( 5) 00:22:13.743 16727.284 - 16852.114: 98.0592% ( 5) 00:22:13.743 16852.114 - 16976.945: 98.1003% ( 5) 00:22:13.743 16976.945 - 17101.775: 98.1497% ( 6) 00:22:13.743 17101.775 - 17226.606: 98.1990% ( 6) 00:22:13.743 17226.606 - 17351.436: 98.2401% ( 5) 00:22:13.743 17351.436 - 17476.267: 98.2812% ( 5) 00:22:13.743 17476.267 - 17601.097: 98.3224% ( 5) 00:22:13.743 17601.097 - 17725.928: 98.3635% ( 5) 00:22:13.743 17725.928 - 17850.758: 98.4128% ( 6) 00:22:13.743 17850.758 - 17975.589: 98.4293% ( 2) 00:22:13.743 17975.589 - 18100.419: 98.4786% ( 6) 00:22:13.743 18100.419 - 18225.250: 98.5197% ( 5) 00:22:13.743 18225.250 - 18350.080: 98.5609% ( 5) 00:22:13.743 18350.080 - 18474.910: 98.6020% ( 5) 00:22:13.743 18474.910 - 18599.741: 98.6349% ( 4) 00:22:13.743 18599.741 - 18724.571: 98.6760% ( 5) 00:22:13.743 18724.571 - 18849.402: 98.7171% ( 5) 00:22:13.743 18849.402 - 18974.232: 98.7582% ( 5) 00:22:13.743 18974.232 - 19099.063: 98.7993% ( 5) 00:22:13.743 19099.063 - 19223.893: 98.8405% ( 5) 00:22:13.743 19223.893 - 19348.724: 98.8816% ( 5) 00:22:13.743 19348.724 - 19473.554: 98.9145% ( 4) 00:22:13.743 19473.554 - 19598.385: 98.9474% ( 4) 00:22:13.743 37948.465 - 38198.126: 98.9638% ( 2) 00:22:13.743 38198.126 - 38447.787: 99.0049% ( 5) 00:22:13.743 38447.787 - 38697.448: 99.0378% ( 4) 00:22:13.743 38697.448 - 38947.109: 99.0789% ( 5) 00:22:13.743 38947.109 - 39196.770: 99.1118% ( 4) 00:22:13.743 39196.770 - 39446.430: 99.1530% ( 5) 00:22:13.743 39446.430 - 39696.091: 99.2023% ( 6) 00:22:13.743 39696.091 - 39945.752: 99.2352% ( 4) 00:22:13.743 39945.752 - 40195.413: 99.2763% ( 5) 00:22:13.743 40195.413 - 40445.074: 99.3174% ( 5) 00:22:13.743 40445.074 - 40694.735: 99.3586% ( 5) 00:22:13.743 40694.735 - 40944.396: 99.3914% ( 4) 00:22:13.743 40944.396 - 41194.057: 99.4243% ( 4) 00:22:13.743 41194.057 - 41443.718: 99.4655% ( 5) 00:22:13.743 41443.718 - 41693.379: 99.4737% ( 1) 00:22:13.743 48434.225 - 48683.886: 99.4819% ( 1) 00:22:13.743 48683.886 - 48933.547: 99.5230% ( 5) 00:22:13.743 48933.547 - 49183.208: 99.5724% ( 6) 00:22:13.743 49183.208 - 49432.869: 99.6053% ( 4) 00:22:13.743 
49432.869 - 49682.530: 99.6464% ( 5) 00:22:13.743 49682.530 - 49932.190: 99.6957% ( 6) 00:22:13.743 49932.190 - 50181.851: 99.7368% ( 5) 00:22:13.743 50181.851 - 50431.512: 99.7780% ( 5) 00:22:13.743 50431.512 - 50681.173: 99.8191% ( 5) 00:22:13.743 50681.173 - 50930.834: 99.8602% ( 5) 00:22:13.743 50930.834 - 51180.495: 99.9013% ( 5) 00:22:13.743 51180.495 - 51430.156: 99.9424% ( 5) 00:22:13.743 51430.156 - 51679.817: 99.9836% ( 5) 00:22:13.743 51679.817 - 51929.478: 100.0000% ( 2) 00:22:13.743 00:22:13.743 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:22:13.743 ============================================================================== 00:22:13.743 Range in us Cumulative IO count 00:22:13.743 8301.227 - 8363.642: 0.0164% ( 2) 00:22:13.743 8363.642 - 8426.057: 0.1974% ( 22) 00:22:13.743 8426.057 - 8488.472: 0.6332% ( 53) 00:22:13.743 8488.472 - 8550.888: 1.1266% ( 60) 00:22:13.743 8550.888 - 8613.303: 1.9655% ( 102) 00:22:13.743 8613.303 - 8675.718: 3.1743% ( 147) 00:22:13.743 8675.718 - 8738.133: 4.6217% ( 176) 00:22:13.743 8738.133 - 8800.549: 6.2582% ( 199) 00:22:13.743 8800.549 - 8862.964: 8.0263% ( 215) 00:22:13.743 8862.964 - 8925.379: 10.2961% ( 276) 00:22:13.743 8925.379 - 8987.794: 12.7303% ( 296) 00:22:13.743 8987.794 - 9050.210: 15.2467% ( 306) 00:22:13.743 9050.210 - 9112.625: 18.1497% ( 353) 00:22:13.743 9112.625 - 9175.040: 21.1842% ( 369) 00:22:13.743 9175.040 - 9237.455: 24.3174% ( 381) 00:22:13.743 9237.455 - 9299.870: 27.5329% ( 391) 00:22:13.743 9299.870 - 9362.286: 30.8553% ( 404) 00:22:13.743 9362.286 - 9424.701: 34.1941% ( 406) 00:22:13.743 9424.701 - 9487.116: 37.2533% ( 372) 00:22:13.743 9487.116 - 9549.531: 40.1974% ( 358) 00:22:13.743 9549.531 - 9611.947: 43.0757% ( 350) 00:22:13.743 9611.947 - 9674.362: 45.7319% ( 323) 00:22:13.743 9674.362 - 9736.777: 48.2566% ( 307) 00:22:13.743 9736.777 - 9799.192: 50.7566% ( 304) 00:22:13.743 9799.192 - 9861.608: 53.0921% ( 284) 00:22:13.743 9861.608 - 9924.023: 55.1316% ( 248) 00:22:13.743 9924.023 - 9986.438: 57.1711% ( 248) 00:22:13.743 9986.438 - 10048.853: 59.3750% ( 268) 00:22:13.743 10048.853 - 10111.269: 61.5543% ( 265) 00:22:13.743 10111.269 - 10173.684: 63.7829% ( 271) 00:22:13.743 10173.684 - 10236.099: 65.7072% ( 234) 00:22:13.743 10236.099 - 10298.514: 67.4260% ( 209) 00:22:13.743 10298.514 - 10360.930: 69.0378% ( 196) 00:22:13.743 10360.930 - 10423.345: 70.6003% ( 190) 00:22:13.743 10423.345 - 10485.760: 71.9572% ( 165) 00:22:13.743 10485.760 - 10548.175: 73.3141% ( 165) 00:22:13.743 10548.175 - 10610.590: 74.4655% ( 140) 00:22:13.743 10610.590 - 10673.006: 75.6003% ( 138) 00:22:13.743 10673.006 - 10735.421: 76.7352% ( 138) 00:22:13.743 10735.421 - 10797.836: 77.7385% ( 122) 00:22:13.743 10797.836 - 10860.251: 78.6924% ( 116) 00:22:13.743 10860.251 - 10922.667: 79.6628% ( 118) 00:22:13.743 10922.667 - 10985.082: 80.6086% ( 115) 00:22:13.743 10985.082 - 11047.497: 81.5378% ( 113) 00:22:13.743 11047.497 - 11109.912: 82.4342% ( 109) 00:22:13.743 11109.912 - 11172.328: 83.3306% ( 109) 00:22:13.743 11172.328 - 11234.743: 84.2188% ( 108) 00:22:13.743 11234.743 - 11297.158: 85.1480% ( 113) 00:22:13.743 11297.158 - 11359.573: 85.9704% ( 100) 00:22:13.743 11359.573 - 11421.989: 86.8257% ( 104) 00:22:13.743 11421.989 - 11484.404: 87.5493% ( 88) 00:22:13.743 11484.404 - 11546.819: 88.1579% ( 74) 00:22:13.743 11546.819 - 11609.234: 88.5691% ( 50) 00:22:13.743 11609.234 - 11671.650: 88.8898% ( 39) 00:22:13.743 11671.650 - 11734.065: 89.2105% ( 39) 00:22:13.743 11734.065 - 11796.480: 89.4901% ( 34) 
00:22:13.743 11796.480 - 11858.895: 89.7697% ( 34) 00:22:13.743 11858.895 - 11921.310: 90.0247% ( 31) 00:22:13.743 11921.310 - 11983.726: 90.2878% ( 32) 00:22:13.743 11983.726 - 12046.141: 90.5263% ( 29) 00:22:13.743 12046.141 - 12108.556: 90.7730% ( 30) 00:22:13.743 12108.556 - 12170.971: 90.9868% ( 26) 00:22:13.743 12170.971 - 12233.387: 91.1513% ( 20) 00:22:13.743 12233.387 - 12295.802: 91.2829% ( 16) 00:22:13.743 12295.802 - 12358.217: 91.4062% ( 15) 00:22:13.743 12358.217 - 12420.632: 91.5214% ( 14) 00:22:13.743 12420.632 - 12483.048: 91.6447% ( 15) 00:22:13.743 12483.048 - 12545.463: 91.7681% ( 15) 00:22:13.743 12545.463 - 12607.878: 91.8668% ( 12) 00:22:13.743 12607.878 - 12670.293: 91.9819% ( 14) 00:22:13.743 12670.293 - 12732.709: 92.0724% ( 11) 00:22:13.744 12732.709 - 12795.124: 92.1546% ( 10) 00:22:13.744 12795.124 - 12857.539: 92.2122% ( 7) 00:22:13.744 12857.539 - 12919.954: 92.2697% ( 7) 00:22:13.744 12919.954 - 12982.370: 92.3273% ( 7) 00:22:13.744 12982.370 - 13044.785: 92.3849% ( 7) 00:22:13.744 13044.785 - 13107.200: 92.4671% ( 10) 00:22:13.744 13107.200 - 13169.615: 92.5329% ( 8) 00:22:13.744 13169.615 - 13232.030: 92.5905% ( 7) 00:22:13.744 13232.030 - 13294.446: 92.6809% ( 11) 00:22:13.744 13294.446 - 13356.861: 92.7961% ( 14) 00:22:13.744 13356.861 - 13419.276: 92.9194% ( 15) 00:22:13.744 13419.276 - 13481.691: 93.0181% ( 12) 00:22:13.744 13481.691 - 13544.107: 93.1250% ( 13) 00:22:13.744 13544.107 - 13606.522: 93.2484% ( 15) 00:22:13.744 13606.522 - 13668.937: 93.3553% ( 13) 00:22:13.744 13668.937 - 13731.352: 93.5197% ( 20) 00:22:13.744 13731.352 - 13793.768: 93.6760% ( 19) 00:22:13.744 13793.768 - 13856.183: 93.8322% ( 19) 00:22:13.744 13856.183 - 13918.598: 93.9885% ( 19) 00:22:13.744 13918.598 - 13981.013: 94.1365% ( 18) 00:22:13.744 13981.013 - 14043.429: 94.3092% ( 21) 00:22:13.744 14043.429 - 14105.844: 94.4655% ( 19) 00:22:13.744 14105.844 - 14168.259: 94.6299% ( 20) 00:22:13.744 14168.259 - 14230.674: 94.7862% ( 19) 00:22:13.744 14230.674 - 14293.090: 94.9424% ( 19) 00:22:13.744 14293.090 - 14355.505: 95.1151% ( 21) 00:22:13.744 14355.505 - 14417.920: 95.2796% ( 20) 00:22:13.744 14417.920 - 14480.335: 95.4276% ( 18) 00:22:13.744 14480.335 - 14542.750: 95.5839% ( 19) 00:22:13.744 14542.750 - 14605.166: 95.7401% ( 19) 00:22:13.744 14605.166 - 14667.581: 95.8799% ( 17) 00:22:13.744 14667.581 - 14729.996: 96.0362% ( 19) 00:22:13.744 14729.996 - 14792.411: 96.2089% ( 21) 00:22:13.744 14792.411 - 14854.827: 96.3405% ( 16) 00:22:13.744 14854.827 - 14917.242: 96.4391% ( 12) 00:22:13.744 14917.242 - 14979.657: 96.5132% ( 9) 00:22:13.744 14979.657 - 15042.072: 96.5707% ( 7) 00:22:13.744 15042.072 - 15104.488: 96.6447% ( 9) 00:22:13.744 15104.488 - 15166.903: 96.6941% ( 6) 00:22:13.744 15166.903 - 15229.318: 96.7763% ( 10) 00:22:13.744 15229.318 - 15291.733: 96.8503% ( 9) 00:22:13.744 15291.733 - 15354.149: 96.8997% ( 6) 00:22:13.744 15354.149 - 15416.564: 96.9901% ( 11) 00:22:13.744 15416.564 - 15478.979: 97.0724% ( 10) 00:22:13.744 15478.979 - 15541.394: 97.1546% ( 10) 00:22:13.744 15541.394 - 15603.810: 97.2122% ( 7) 00:22:13.744 15603.810 - 15666.225: 97.2780% ( 8) 00:22:13.744 15666.225 - 15728.640: 97.3355% ( 7) 00:22:13.744 15728.640 - 15791.055: 97.4013% ( 8) 00:22:13.744 15791.055 - 15853.470: 97.4507% ( 6) 00:22:13.744 15853.470 - 15915.886: 97.4918% ( 5) 00:22:13.744 15915.886 - 15978.301: 97.5247% ( 4) 00:22:13.744 15978.301 - 16103.131: 97.5822% ( 7) 00:22:13.744 16103.131 - 16227.962: 97.6151% ( 4) 00:22:13.744 16227.962 - 16352.792: 97.6480% ( 4) 
00:22:13.744 16352.792 - 16477.623: 97.6727% ( 3) 00:22:13.744 16477.623 - 16602.453: 97.7385% ( 8) 00:22:13.744 16602.453 - 16727.284: 97.8125% ( 9) 00:22:13.744 16727.284 - 16852.114: 97.8947% ( 10) 00:22:13.744 16852.114 - 16976.945: 97.9770% ( 10) 00:22:13.744 16976.945 - 17101.775: 98.0592% ( 10) 00:22:13.744 17101.775 - 17226.606: 98.1497% ( 11) 00:22:13.744 17226.606 - 17351.436: 98.2155% ( 8) 00:22:13.744 17351.436 - 17476.267: 98.2977% ( 10) 00:22:13.744 17476.267 - 17601.097: 98.3470% ( 6) 00:22:13.744 17601.097 - 17725.928: 98.4046% ( 7) 00:22:13.744 17725.928 - 17850.758: 98.4457% ( 5) 00:22:13.744 17850.758 - 17975.589: 98.4951% ( 6) 00:22:13.744 17975.589 - 18100.419: 98.5526% ( 7) 00:22:13.744 18100.419 - 18225.250: 98.6020% ( 6) 00:22:13.744 18225.250 - 18350.080: 98.6431% ( 5) 00:22:13.744 18350.080 - 18474.910: 98.6924% ( 6) 00:22:13.744 18474.910 - 18599.741: 98.7500% ( 7) 00:22:13.744 18599.741 - 18724.571: 98.7993% ( 6) 00:22:13.744 18724.571 - 18849.402: 98.8487% ( 6) 00:22:13.744 18849.402 - 18974.232: 98.8898% ( 5) 00:22:13.744 18974.232 - 19099.063: 98.9474% ( 7) 00:22:13.744 35451.855 - 35701.516: 98.9885% ( 5) 00:22:13.744 35701.516 - 35951.177: 99.0296% ( 5) 00:22:13.744 35951.177 - 36200.838: 99.0707% ( 5) 00:22:13.744 36200.838 - 36450.499: 99.1118% ( 5) 00:22:13.744 36450.499 - 36700.160: 99.1530% ( 5) 00:22:13.744 36700.160 - 36949.821: 99.1941% ( 5) 00:22:13.744 36949.821 - 37199.482: 99.2434% ( 6) 00:22:13.744 37199.482 - 37449.143: 99.2763% ( 4) 00:22:13.744 37449.143 - 37698.804: 99.3257% ( 6) 00:22:13.744 37698.804 - 37948.465: 99.3668% ( 5) 00:22:13.744 37948.465 - 38198.126: 99.3997% ( 4) 00:22:13.744 38198.126 - 38447.787: 99.4490% ( 6) 00:22:13.744 38447.787 - 38697.448: 99.4737% ( 3) 00:22:13.744 44938.971 - 45188.632: 99.4819% ( 1) 00:22:13.744 45188.632 - 45438.293: 99.5312% ( 6) 00:22:13.744 45438.293 - 45687.954: 99.5806% ( 6) 00:22:13.744 45687.954 - 45937.615: 99.6217% ( 5) 00:22:13.744 45937.615 - 46187.276: 99.6628% ( 5) 00:22:13.744 46187.276 - 46436.937: 99.7039% ( 5) 00:22:13.744 46436.937 - 46686.598: 99.7533% ( 6) 00:22:13.744 46686.598 - 46936.259: 99.7944% ( 5) 00:22:13.744 46936.259 - 47185.920: 99.8438% ( 6) 00:22:13.744 47185.920 - 47435.581: 99.8849% ( 5) 00:22:13.744 47435.581 - 47685.242: 99.9260% ( 5) 00:22:13.744 47685.242 - 47934.903: 99.9753% ( 6) 00:22:13.744 47934.903 - 48184.564: 100.0000% ( 3) 00:22:13.744 00:22:13.744 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:22:13.744 ============================================================================== 00:22:13.744 Range in us Cumulative IO count 00:22:13.744 8301.227 - 8363.642: 0.0329% ( 4) 00:22:13.744 8363.642 - 8426.057: 0.2138% ( 22) 00:22:13.744 8426.057 - 8488.472: 0.6826% ( 57) 00:22:13.744 8488.472 - 8550.888: 1.2336% ( 67) 00:22:13.744 8550.888 - 8613.303: 1.8421% ( 74) 00:22:13.744 8613.303 - 8675.718: 3.1168% ( 155) 00:22:13.744 8675.718 - 8738.133: 4.4490% ( 162) 00:22:13.744 8738.133 - 8800.549: 5.8964% ( 176) 00:22:13.744 8800.549 - 8862.964: 7.5987% ( 207) 00:22:13.744 8862.964 - 8925.379: 9.8766% ( 277) 00:22:13.744 8925.379 - 8987.794: 12.2862% ( 293) 00:22:13.744 8987.794 - 9050.210: 14.9342% ( 322) 00:22:13.744 9050.210 - 9112.625: 17.8043% ( 349) 00:22:13.744 9112.625 - 9175.040: 20.8635% ( 372) 00:22:13.744 9175.040 - 9237.455: 24.0049% ( 382) 00:22:13.744 9237.455 - 9299.870: 27.0724% ( 373) 00:22:13.744 9299.870 - 9362.286: 30.3043% ( 393) 00:22:13.744 9362.286 - 9424.701: 33.5033% ( 389) 00:22:13.744 9424.701 - 9487.116: 
36.6283% ( 380) 00:22:13.744 9487.116 - 9549.531: 39.6793% ( 371) 00:22:13.744 9549.531 - 9611.947: 42.6398% ( 360) 00:22:13.744 9611.947 - 9674.362: 45.4770% ( 345) 00:22:13.744 9674.362 - 9736.777: 47.9770% ( 304) 00:22:13.744 9736.777 - 9799.192: 50.4194% ( 297) 00:22:13.744 9799.192 - 9861.608: 52.7303% ( 281) 00:22:13.744 9861.608 - 9924.023: 54.9095% ( 265) 00:22:13.744 9924.023 - 9986.438: 56.9655% ( 250) 00:22:13.744 9986.438 - 10048.853: 59.2352% ( 276) 00:22:13.744 10048.853 - 10111.269: 61.5625% ( 283) 00:22:13.744 10111.269 - 10173.684: 63.6020% ( 248) 00:22:13.744 10173.684 - 10236.099: 65.7484% ( 261) 00:22:13.744 10236.099 - 10298.514: 67.5905% ( 224) 00:22:13.744 10298.514 - 10360.930: 69.1612% ( 191) 00:22:13.744 10360.930 - 10423.345: 70.6086% ( 176) 00:22:13.744 10423.345 - 10485.760: 72.0312% ( 173) 00:22:13.744 10485.760 - 10548.175: 73.4128% ( 168) 00:22:13.744 10548.175 - 10610.590: 74.6711% ( 153) 00:22:13.744 10610.590 - 10673.006: 75.9539% ( 156) 00:22:13.744 10673.006 - 10735.421: 77.1382% ( 144) 00:22:13.744 10735.421 - 10797.836: 78.1661% ( 125) 00:22:13.744 10797.836 - 10860.251: 79.1118% ( 115) 00:22:13.744 10860.251 - 10922.667: 80.0247% ( 111) 00:22:13.744 10922.667 - 10985.082: 80.9128% ( 108) 00:22:13.744 10985.082 - 11047.497: 81.8339% ( 112) 00:22:13.744 11047.497 - 11109.912: 82.7467% ( 111) 00:22:13.744 11109.912 - 11172.328: 83.6595% ( 111) 00:22:13.744 11172.328 - 11234.743: 84.5230% ( 105) 00:22:13.744 11234.743 - 11297.158: 85.4276% ( 110) 00:22:13.744 11297.158 - 11359.573: 86.2664% ( 102) 00:22:13.744 11359.573 - 11421.989: 87.1053% ( 102) 00:22:13.744 11421.989 - 11484.404: 87.8125% ( 86) 00:22:13.744 11484.404 - 11546.819: 88.3306% ( 63) 00:22:13.744 11546.819 - 11609.234: 88.7829% ( 55) 00:22:13.744 11609.234 - 11671.650: 89.1365% ( 43) 00:22:13.744 11671.650 - 11734.065: 89.4079% ( 33) 00:22:13.744 11734.065 - 11796.480: 89.6217% ( 26) 00:22:13.744 11796.480 - 11858.895: 89.8109% ( 23) 00:22:13.744 11858.895 - 11921.310: 90.0082% ( 24) 00:22:13.744 11921.310 - 11983.726: 90.1974% ( 23) 00:22:13.744 11983.726 - 12046.141: 90.4030% ( 25) 00:22:13.744 12046.141 - 12108.556: 90.5757% ( 21) 00:22:13.744 12108.556 - 12170.971: 90.7812% ( 25) 00:22:13.744 12170.971 - 12233.387: 90.9539% ( 21) 00:22:13.744 12233.387 - 12295.802: 91.1349% ( 22) 00:22:13.744 12295.802 - 12358.217: 91.2829% ( 18) 00:22:13.744 12358.217 - 12420.632: 91.4391% ( 19) 00:22:13.744 12420.632 - 12483.048: 91.5707% ( 16) 00:22:13.744 12483.048 - 12545.463: 91.7270% ( 19) 00:22:13.744 12545.463 - 12607.878: 91.8503% ( 15) 00:22:13.744 12607.878 - 12670.293: 91.9572% ( 13) 00:22:13.744 12670.293 - 12732.709: 92.0312% ( 9) 00:22:13.744 12732.709 - 12795.124: 92.1053% ( 9) 00:22:13.744 12795.124 - 12857.539: 92.1957% ( 11) 00:22:13.744 12857.539 - 12919.954: 92.2451% ( 6) 00:22:13.744 12919.954 - 12982.370: 92.3109% ( 8) 00:22:13.744 12982.370 - 13044.785: 92.4013% ( 11) 00:22:13.744 13044.785 - 13107.200: 92.5082% ( 13) 00:22:13.744 13107.200 - 13169.615: 92.6069% ( 12) 00:22:13.744 13169.615 - 13232.030: 92.6891% ( 10) 00:22:13.744 13232.030 - 13294.446: 92.8043% ( 14) 00:22:13.744 13294.446 - 13356.861: 92.9194% ( 14) 00:22:13.744 13356.861 - 13419.276: 93.0428% ( 15) 00:22:13.744 13419.276 - 13481.691: 93.1579% ( 14) 00:22:13.744 13481.691 - 13544.107: 93.2895% ( 16) 00:22:13.744 13544.107 - 13606.522: 93.4457% ( 19) 00:22:13.744 13606.522 - 13668.937: 93.6349% ( 23) 00:22:13.744 13668.937 - 13731.352: 93.8158% ( 22) 00:22:13.744 13731.352 - 13793.768: 94.0049% ( 23) 
00:22:13.744 13793.768 - 13856.183: 94.1694% ( 20) 00:22:13.744 13856.183 - 13918.598: 94.3339% ( 20) 00:22:13.744 13918.598 - 13981.013: 94.4819% ( 18) 00:22:13.745 13981.013 - 14043.429: 94.6464% ( 20) 00:22:13.745 14043.429 - 14105.844: 94.8026% ( 19) 00:22:13.745 14105.844 - 14168.259: 94.9836% ( 22) 00:22:13.745 14168.259 - 14230.674: 95.1234% ( 17) 00:22:13.745 14230.674 - 14293.090: 95.2796% ( 19) 00:22:13.745 14293.090 - 14355.505: 95.4276% ( 18) 00:22:13.745 14355.505 - 14417.920: 95.5674% ( 17) 00:22:13.745 14417.920 - 14480.335: 95.7319% ( 20) 00:22:13.745 14480.335 - 14542.750: 95.8964% ( 20) 00:22:13.745 14542.750 - 14605.166: 96.0115% ( 14) 00:22:13.745 14605.166 - 14667.581: 96.1431% ( 16) 00:22:13.745 14667.581 - 14729.996: 96.2500% ( 13) 00:22:13.745 14729.996 - 14792.411: 96.3816% ( 16) 00:22:13.745 14792.411 - 14854.827: 96.4556% ( 9) 00:22:13.745 14854.827 - 14917.242: 96.5378% ( 10) 00:22:13.745 14917.242 - 14979.657: 96.6201% ( 10) 00:22:13.745 14979.657 - 15042.072: 96.6776% ( 7) 00:22:13.745 15042.072 - 15104.488: 96.7352% ( 7) 00:22:13.745 15104.488 - 15166.903: 96.7845% ( 6) 00:22:13.745 15166.903 - 15229.318: 96.8257% ( 5) 00:22:13.745 15229.318 - 15291.733: 96.8750% ( 6) 00:22:13.745 15291.733 - 15354.149: 96.9243% ( 6) 00:22:13.745 15354.149 - 15416.564: 96.9655% ( 5) 00:22:13.745 15416.564 - 15478.979: 97.0230% ( 7) 00:22:13.745 15478.979 - 15541.394: 97.0724% ( 6) 00:22:13.745 15541.394 - 15603.810: 97.1217% ( 6) 00:22:13.745 15603.810 - 15666.225: 97.1628% ( 5) 00:22:13.745 15666.225 - 15728.640: 97.2039% ( 5) 00:22:13.745 15728.640 - 15791.055: 97.2451% ( 5) 00:22:13.745 15791.055 - 15853.470: 97.2697% ( 3) 00:22:13.745 15853.470 - 15915.886: 97.3109% ( 5) 00:22:13.745 15915.886 - 15978.301: 97.3438% ( 4) 00:22:13.745 15978.301 - 16103.131: 97.4178% ( 9) 00:22:13.745 16103.131 - 16227.962: 97.4918% ( 9) 00:22:13.745 16227.962 - 16352.792: 97.5329% ( 5) 00:22:13.745 16352.792 - 16477.623: 97.5576% ( 3) 00:22:13.745 16477.623 - 16602.453: 97.5822% ( 3) 00:22:13.745 16602.453 - 16727.284: 97.6151% ( 4) 00:22:13.745 16727.284 - 16852.114: 97.6398% ( 3) 00:22:13.745 16852.114 - 16976.945: 97.6727% ( 4) 00:22:13.745 16976.945 - 17101.775: 97.7796% ( 13) 00:22:13.745 17101.775 - 17226.606: 97.9276% ( 18) 00:22:13.745 17226.606 - 17351.436: 98.0428% ( 14) 00:22:13.745 17351.436 - 17476.267: 98.1743% ( 16) 00:22:13.745 17476.267 - 17601.097: 98.2977% ( 15) 00:22:13.745 17601.097 - 17725.928: 98.4375% ( 17) 00:22:13.745 17725.928 - 17850.758: 98.5609% ( 15) 00:22:13.745 17850.758 - 17975.589: 98.6924% ( 16) 00:22:13.745 17975.589 - 18100.419: 98.8076% ( 14) 00:22:13.745 18100.419 - 18225.250: 98.8980% ( 11) 00:22:13.745 18225.250 - 18350.080: 98.9474% ( 6) 00:22:13.745 32455.924 - 32705.585: 98.9885% ( 5) 00:22:13.745 32705.585 - 32955.246: 99.0296% ( 5) 00:22:13.745 32955.246 - 33204.907: 99.0707% ( 5) 00:22:13.745 33204.907 - 33454.568: 99.1118% ( 5) 00:22:13.745 33454.568 - 33704.229: 99.1530% ( 5) 00:22:13.745 33704.229 - 33953.890: 99.1941% ( 5) 00:22:13.745 33953.890 - 34203.550: 99.2352% ( 5) 00:22:13.745 34203.550 - 34453.211: 99.2763% ( 5) 00:22:13.745 34453.211 - 34702.872: 99.2928% ( 2) 00:22:13.745 34702.872 - 34952.533: 99.3421% ( 6) 00:22:13.745 34952.533 - 35202.194: 99.3832% ( 5) 00:22:13.745 35202.194 - 35451.855: 99.4243% ( 5) 00:22:13.745 35451.855 - 35701.516: 99.4655% ( 5) 00:22:13.745 35701.516 - 35951.177: 99.4737% ( 1) 00:22:13.745 41943.040 - 42192.701: 99.4901% ( 2) 00:22:13.745 42192.701 - 42442.362: 99.5312% ( 5) 00:22:13.745 42442.362 - 
00:22:13.745 [ tail of the preceding latency histogram: buckets 42692.023us - 45438.293us, cumulative IO 99.57% -> 100.00% ]
00:22:13.745
00:22:13.745 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:22:13.745 ==============================================================================
00:22:13.745        Range in us     Cumulative    IO count
00:22:13.745 [ fine-grained buckets from 8301.227us to 41443.718us: 50% of IO completes by 9799.192us, 90% by 11983.726us, 95% by 14043.429us, and 98.95% by 19223.893us; sparse outliers at 28.7-32.0ms (to 99.47%) and 38.2-41.4ms bring the cumulative total to 100.00% ]
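Each histogram row above has a fixed shape: lower bound, upper bound, cumulative percentage, and a parenthesized per-bucket IO count (the percentage accumulates; the count does not). A minimal parsing sketch, assuming the per-line timestamps have been stripped; hist.txt is a hypothetical file holding one histogram:

    import re

    # One bucket row, e.g. "8363.642 - 8426.057: 0.2385% ( 24)"
    BUCKET = re.compile(
        r"(?P<lo>\d+\.\d+)\s*-\s*(?P<hi>\d+\.\d+):"
        r"\s*(?P<cum>\d+\.\d+)%\s*\(\s*(?P<count>\d+)\s*\)"
    )

    def parse_histogram(lines):
        """Yield (lower_us, upper_us, cumulative_pct, io_count) per bucket."""
        for line in lines:
            for m in BUCKET.finditer(line):
                yield (float(m["lo"]), float(m["hi"]),
                       float(m["cum"]), int(m["count"]))

    with open("hist.txt") as f:   # hypothetical input file
        buckets = list(parse_histogram(f))
    print(len(buckets), "buckets,", sum(b[3] for b in buckets), "IOs")

Using finditer rather than a single anchored match lets the same sketch cope with lines that carry several fused bucket entries, as in the raw console output.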
00:22:13.746 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:22:13.746 ==============================================================================
00:22:13.746        Range in us     Cumulative    IO count
00:22:13.746 [ fine-grained buckets from 8301.227us to 37698.804us: 50% of IO completes by 9799.192us, 90% by 12046.141us, 95% by 14105.844us, and 98.95% by 20222.537us; sparse outliers at 25.0-28.3ms (to 99.47%) and 34.5-37.7ms bring the cumulative total to 100.00% ]
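The bucket boundaries themselves follow the usual log-linear layout: widths are constant within a power-of-two latency range and double when the range doubles (8363.642 - 8301.227 = 62.415us per bucket through roughly 8.3-16.1ms, 16227.962 - 16103.131 = 124.831us through roughly 16.1-32.2ms, 249.661us beyond that). A quick structural check, reusing the buckets list from the parsing sketch above:

    from itertools import groupby

    # Width per bucket; runs of equal width mark one power-of-two range.
    widths = [round(hi - lo, 3) for lo, hi, _, _ in buckets]
    for width, run in groupby(widths):
        print(f"{width:>9.3f} us x {len(list(run))} buckets")

On these histograms this prints a handful of width runs stepping up by factors of two, not one width per bucket (runs of the same width can repeat where empty outlier buckets are omitted from the log).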
00:22:13.747 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:22:13.747 ==============================================================================
00:22:13.747        Range in us     Cumulative    IO count
00:22:13.747 [ fine-grained buckets from 8363.642us to 33953.890us: 50% of IO completes by 9799.192us, 90% by 11921.310us, 95% by 14230.674us, and 99.47% by 24591.604us; a sparse outlier tail at 30.8-34.0ms brings the cumulative total to 100.00% ]
00:22:13.748
00:22:13.748 18:49:42 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:22:15.128 Initializing NVMe Controllers
00:22:15.128 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:22:15.128 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:22:15.128 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:22:15.128 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:22:15.128 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:22:15.128 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:22:15.128 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:22:15.128 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:22:15.128 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:22:15.128 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:22:15.128 Initialization complete. Launching workers.
00:22:15.128 ========================================================
00:22:15.128                                                                     Latency(us)
00:22:15.128 Device Information                     :       IOPS      MiB/s    Average        min        max
00:22:15.128 PCIE (0000:00:10.0) NSID 1 from core 0:    6130.38      71.84   20912.61    9171.60   60847.10
00:22:15.128 PCIE (0000:00:11.0) NSID 1 from core 0:    6130.38      71.84   20841.48    9491.41   56574.55
00:22:15.128 PCIE (0000:00:13.0) NSID 1 from core 0:    6130.38      71.84   20771.62    9118.80   53868.17
00:22:15.128 PCIE (0000:00:12.0) NSID 1 from core 0:    6130.38      71.84   20701.29    9330.15   50176.66
00:22:15.128 PCIE (0000:00:12.0) NSID 2 from core 0:    6194.24      72.59   20406.80    9153.49   39850.35
00:22:15.128 PCIE (0000:00:12.0) NSID 3 from core 0:    6194.24      72.59   20338.38    9475.87   35942.82
00:22:15.128 ========================================================
00:22:15.128 Total                                  :   36910.02     432.54   20661.03    9118.80   60847.10
00:22:15.128
00:22:15.128 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:22:15.128 =================================================================================
00:22:15.128  1.00000% :  9611.947us
00:22:15.128 10.00000% : 10173.684us
00:22:15.128 25.00000% : 11858.895us
00:22:15.128 50.00000% : 15354.149us
00:22:15.128 75.00000% : 29959.314us
00:22:15.128 90.00000% : 31582.110us
00:22:15.128 95.00000% : 32705.585us
00:22:15.128 98.00000% : 48184.564us
00:22:15.128 99.00000% : 57422.019us
00:22:15.128 99.50000% : 58919.985us
00:22:15.128 99.90000% : 60417.950us
00:22:15.128 99.99000% : 60917.272us
00:22:15.128 99.99900% : 60917.272us
00:22:15.128 99.99990% : 60917.272us
00:22:15.128 99.99999% : 60917.272us
00:22:15.128
00:22:15.128 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:22:15.128 =================================================================================
00:22:15.128  1.00000% :  9736.777us
00:22:15.128 10.00000% : 10173.684us
00:22:15.128 25.00000% : 11858.895us
00:22:15.128 50.00000% : 15291.733us
00:22:15.128 75.00000% : 30333.806us
00:22:15.128 90.00000% : 31332.450us
00:22:15.128 95.00000% : 31956.602us
00:22:15.128 98.00000% : 44689.310us
00:22:15.128 99.00000% : 53926.766us
00:22:15.128 99.50000% : 55424.731us
00:22:15.128 99.90000% : 56423.375us
00:22:15.128 99.99000% : 56673.036us
00:22:15.128 99.99900% : 56673.036us
00:22:15.128 99.99990% : 56673.036us
00:22:15.128 99.99999% : 56673.036us
00:22:15.128
00:22:15.128 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:22:15.128 =================================================================================
00:22:15.128  1.00000% :  9674.362us
00:22:15.128 10.00000% : 10236.099us
00:22:15.128 25.00000% : 11921.310us
00:22:15.128 50.00000% : 15291.733us
00:22:15.128 75.00000% : 30208.975us
00:22:15.128 90.00000% : 31207.619us
00:22:15.128 95.00000% : 32206.263us
00:22:15.128 98.00000% : 42192.701us
00:22:15.128 99.00000% : 51180.495us
00:22:15.128 99.50000% : 52678.461us
00:22:15.128 99.90000% : 53677.105us
00:22:15.128 99.99000% : 53926.766us
00:22:15.128 99.99900% : 53926.766us
00:22:15.128 99.99990% : 53926.766us
00:22:15.128 99.99999% : 53926.766us
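The device table is internally consistent with the command line above: MiB/s is IOPS times the 12288-byte IO size, and because each namespace runs at queue depth 128 (-q 128), Little's law (mean latency ~ outstanding IOs / throughput) reproduces the Average column to within a couple of percent. A worked check with two rows copied from the table; the numbers are from this log, the script itself is only illustrative:

    IO_SIZE = 12288      # bytes, from -o 12288
    QDEPTH  = 128        # per namespace, from -q 128

    rows = {  # device: (IOPS, average latency in us), from the table above
        "PCIE (0000:00:10.0) NSID 1": (6130.38, 20912.61),
        "PCIE (0000:00:12.0) NSID 3": (6194.24, 20338.38),
    }
    for dev, (iops, avg_us) in rows.items():
        mib_s   = iops * IO_SIZE / 2**20     # 6130.38 IOPS -> 71.84 MiB/s
        pred_us = QDEPTH / iops * 1e6        # Little's law: ~20880 us
        print(f"{dev}: {mib_s:6.2f} MiB/s, "
              f"predicted {pred_us:8.0f} us, reported {avg_us} us")

The same arithmetic holds for the Total row: 36910.02 IOPS x 12288 bytes is the reported 432.54 MiB/s.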
00:22:15.128 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:22:15.128 =================================================================================
00:22:15.128  1.00000% :  9611.947us
00:22:15.128 10.00000% : 10298.514us
00:22:15.128 25.00000% : 11858.895us
00:22:15.128 50.00000% : 15354.149us
00:22:15.128 75.00000% : 30208.975us
00:22:15.128 90.00000% : 31332.450us
00:22:15.128 95.00000% : 31956.602us
00:22:15.128 98.00000% : 38697.448us
00:22:15.128 99.00000% : 47435.581us
00:22:15.128 99.50000% : 48933.547us
00:22:15.128 99.90000% : 49932.190us
00:22:15.128 99.99000% : 50181.851us
00:22:15.128 99.99900% : 50181.851us
00:22:15.128 99.99990% : 50181.851us
00:22:15.128 99.99999% : 50181.851us
00:22:15.128
00:22:15.128 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:22:15.128 =================================================================================
00:22:15.128  1.00000% :  9736.777us
00:22:15.128 10.00000% : 10298.514us
00:22:15.128 25.00000% : 11983.726us
00:22:15.128 50.00000% : 15229.318us
00:22:15.128 75.00000% : 30208.975us
00:22:15.128 90.00000% : 31082.789us
00:22:15.128 95.00000% : 31706.941us
00:22:15.128 98.00000% : 32705.585us
00:22:15.128 99.00000% : 36700.160us
00:22:15.128 99.50000% : 38198.126us
00:22:15.128 99.90000% : 39696.091us
00:22:15.128 99.99000% : 39945.752us
00:22:15.128 99.99900% : 39945.752us
00:22:15.128 99.99990% : 39945.752us
00:22:15.128 99.99999% : 39945.752us
00:22:15.128
00:22:15.128 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:22:15.128 =================================================================================
00:22:15.128  1.00000% :  9799.192us
00:22:15.128 10.00000% : 10173.684us
00:22:15.128 25.00000% : 11858.895us
00:22:15.128 50.00000% : 15291.733us
00:22:15.128 75.00000% : 30084.145us
00:22:15.128 90.00000% : 31082.789us
00:22:15.128 95.00000% : 31831.771us
00:22:15.128 98.00000% : 32705.585us
00:22:15.128 99.00000% : 33704.229us
00:22:15.128 99.50000% : 34702.872us
00:22:15.128 99.90000% : 35701.516us
00:22:15.128 99.99000% : 35951.177us
00:22:15.128 99.99900% : 35951.177us
00:22:15.128 99.99990% : 35951.177us
00:22:15.128 99.99999% : 35951.177us
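These percentile summaries line up with the histograms that follow: each "NN% : X us" value is the upper bound of the first bucket whose cumulative percentage reaches NN% (for 0000:00:10.0 NSID 1 below, the 15291.733-15354.149us bucket is the first to cross 50%, and the summary indeed reports 50.00000% : 15354.149us). A lookup sketch over buckets parsed as in the earlier sketch:

    def percentile_us(buckets, pct):
        """Upper bound (us) of the first bucket whose cumulative % >= pct."""
        for lo, hi, cum, count in buckets:
            if cum >= pct:
                return hi
        return buckets[-1][1]

    for pct in (50.0, 90.0, 99.0):
        print(f"{pct:>9.5f}% : {percentile_us(buckets, pct)}us")

This also explains why the extreme percentiles repeat one value (99.99000% through 99.99999% all report 60917.272us for 0000:00:10.0): once the remaining tail fits inside the last occupied bucket, every higher percentile resolves to that bucket's upper bound.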
00:22:15.128 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:22:15.129 ==============================================================================
00:22:15.129        Range in us     Cumulative    IO count
00:22:15.129 [ fine-grained buckets from 9112.625us to 60917.272us: 50% of IO completes by 15354.149us, 75% by 29959.314us, 90% by 31582.110us, and 97.92% by 34952.533us; sparse outliers at 47.7-51.2ms (to 98.96%) and 57.2-60.9ms bring the cumulative total to 100.00% ]
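The two numeric columns are redundant in a checkable way: the parenthesized counts are per-bucket, so their running sum over the total must reproduce the cumulative percentage, and the total should match the run length (a one-second pass at ~6130 IOPS implies a total near 6,100 IOs per namespace, which is also what the 0.0163% first bucket with count 1 implies). A consistency check, again over buckets from the parsing sketch:

    total = sum(count for _, _, _, count in buckets)   # ~6130 for a 1 s run
    running = 0
    for lo, hi, cum, count in buckets:
        running += count
        # The printed cumulative % should equal the running fraction.
        assert abs(100.0 * running / total - cum) < 0.01, (hi, cum)
    print(total, "IOs; cumulative column matches the per-bucket counts")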
00:22:15.130 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:22:15.130 ==============================================================================
00:22:15.130        Range in us     Cumulative    IO count
00:22:15.130 [ fine-grained buckets from 9487.116us to 56673.036us: 50% of IO completes by 15291.733us, 75% by 30333.806us, 90% by 31332.450us, and 97.92% by 33704.229us; sparse outliers at 44.2-47.4ms (to 98.96%) and 53.4-56.7ms bring the cumulative total to 100.00% ]
00:22:15.131 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:22:15.131 ==============================================================================
00:22:15.131        Range in us     Cumulative    IO count
00:22:15.131 [ fine-grained buckets from 9112.625us to 53926.766us: 50% of IO completes by 15291.733us, 75% by 30208.975us, 90% by 31207.619us, and 97.92% by 33953.890us; sparse outliers at 41.7-44.9ms (to 98.96%) and 50.7-53.9ms bring the cumulative total to 100.00% ]
00:22:15.132
00:22:15.132 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:22:15.132 ==============================================================================
00:22:15.132        Range in us     Cumulative    IO count
00:22:15.132 [ buckets from 9299.870us to 12046.141us: cumulative IO climbs from 0.11% to 26.76% (the log ends here, mid-histogram) ]
- 12108.556: 27.4577% ( 18) 00:22:15.132 12108.556 - 12170.971: 27.7832% ( 20) 00:22:15.132 12170.971 - 12233.387: 28.5156% ( 45) 00:22:15.132 12233.387 - 12295.802: 29.1829% ( 41) 00:22:15.132 12295.802 - 12358.217: 29.5247% ( 21) 00:22:15.132 12358.217 - 12420.632: 29.9967% ( 29) 00:22:15.132 12420.632 - 12483.048: 30.7617% ( 47) 00:22:15.132 12483.048 - 12545.463: 31.2988% ( 33) 00:22:15.132 12545.463 - 12607.878: 32.3730% ( 66) 00:22:15.132 12607.878 - 12670.293: 32.7311% ( 22) 00:22:15.132 12670.293 - 12732.709: 33.3333% ( 37) 00:22:15.132 12732.709 - 12795.124: 33.8216% ( 30) 00:22:15.132 12795.124 - 12857.539: 34.0495% ( 14) 00:22:15.132 12857.539 - 12919.954: 34.2773% ( 14) 00:22:15.132 12919.954 - 12982.370: 34.4727% ( 12) 00:22:15.132 12982.370 - 13044.785: 34.6517% ( 11) 00:22:15.132 13044.785 - 13107.200: 34.7819% ( 8) 00:22:15.132 13107.200 - 13169.615: 34.9609% ( 11) 00:22:15.132 13169.615 - 13232.030: 35.1888% ( 14) 00:22:15.132 13232.030 - 13294.446: 35.3516% ( 10) 00:22:15.132 13294.446 - 13356.861: 35.8724% ( 32) 00:22:15.132 13356.861 - 13419.276: 36.0677% ( 12) 00:22:15.132 13419.276 - 13481.691: 36.2467% ( 11) 00:22:15.132 13481.691 - 13544.107: 36.4583% ( 13) 00:22:15.132 13544.107 - 13606.522: 36.8490% ( 24) 00:22:15.132 13606.522 - 13668.937: 37.4512% ( 37) 00:22:15.132 13668.937 - 13731.352: 38.5254% ( 66) 00:22:15.132 13731.352 - 13793.768: 39.1276% ( 37) 00:22:15.132 13793.768 - 13856.183: 39.5833% ( 28) 00:22:15.132 13856.183 - 13918.598: 40.2832% ( 43) 00:22:15.132 13918.598 - 13981.013: 40.9017% ( 38) 00:22:15.132 13981.013 - 14043.429: 41.4876% ( 36) 00:22:15.132 14043.429 - 14105.844: 42.0898% ( 37) 00:22:15.132 14105.844 - 14168.259: 42.8874% ( 49) 00:22:15.132 14168.259 - 14230.674: 43.6849% ( 49) 00:22:15.132 14230.674 - 14293.090: 44.1895% ( 31) 00:22:15.132 14293.090 - 14355.505: 44.7266% ( 33) 00:22:15.132 14355.505 - 14417.920: 45.2148% ( 30) 00:22:15.132 14417.920 - 14480.335: 45.6706% ( 28) 00:22:15.132 14480.335 - 14542.750: 46.4193% ( 46) 00:22:15.132 14542.750 - 14605.166: 46.9564% ( 33) 00:22:15.132 14605.166 - 14667.581: 47.2982% ( 21) 00:22:15.132 14667.581 - 14729.996: 47.5911% ( 18) 00:22:15.132 14729.996 - 14792.411: 47.7865% ( 12) 00:22:15.132 14792.411 - 14854.827: 48.0143% ( 14) 00:22:15.132 14854.827 - 14917.242: 48.2422% ( 14) 00:22:15.132 14917.242 - 14979.657: 48.4701% ( 14) 00:22:15.132 14979.657 - 15042.072: 48.6654% ( 12) 00:22:15.132 15042.072 - 15104.488: 48.8770% ( 13) 00:22:15.132 15104.488 - 15166.903: 49.1048% ( 14) 00:22:15.132 15166.903 - 15229.318: 49.3490% ( 15) 00:22:15.132 15229.318 - 15291.733: 49.6257% ( 17) 00:22:15.132 15291.733 - 15354.149: 50.0814% ( 28) 00:22:15.132 15354.149 - 15416.564: 50.5208% ( 27) 00:22:15.132 15416.564 - 15478.979: 50.8301% ( 19) 00:22:15.132 15478.979 - 15541.394: 50.9928% ( 10) 00:22:15.132 15541.394 - 15603.810: 51.1719% ( 11) 00:22:15.132 15603.810 - 15666.225: 51.3672% ( 12) 00:22:15.132 15666.225 - 15728.640: 51.5951% ( 14) 00:22:15.132 15728.640 - 15791.055: 51.7904% ( 12) 00:22:15.132 15791.055 - 15853.470: 52.0020% ( 13) 00:22:15.132 15853.470 - 15915.886: 52.2624% ( 16) 00:22:15.132 15915.886 - 15978.301: 52.5065% ( 15) 00:22:15.132 15978.301 - 16103.131: 52.9622% ( 28) 00:22:15.132 16103.131 - 16227.962: 53.2389% ( 17) 00:22:15.132 16227.962 - 16352.792: 53.4505% ( 13) 00:22:15.132 16352.792 - 16477.623: 53.6133% ( 10) 00:22:15.132 16477.623 - 16602.453: 53.7923% ( 11) 00:22:15.132 16602.453 - 16727.284: 54.0202% ( 14) 00:22:15.132 16727.284 - 16852.114: 54.1178% ( 6) 
00:22:15.132 16852.114 - 16976.945: 54.1667% ( 3) 00:22:15.132 20971.520 - 21096.350: 54.1992% ( 2) 00:22:15.132 21096.350 - 21221.181: 54.2480% ( 3) 00:22:15.132 21221.181 - 21346.011: 54.3294% ( 5) 00:22:15.132 21346.011 - 21470.842: 54.3945% ( 4) 00:22:15.132 21470.842 - 21595.672: 54.4434% ( 3) 00:22:15.132 21595.672 - 21720.503: 54.6875% ( 15) 00:22:15.132 21720.503 - 21845.333: 54.7526% ( 4) 00:22:15.132 21845.333 - 21970.164: 54.7852% ( 2) 00:22:15.132 21970.164 - 22094.994: 54.8340% ( 3) 00:22:15.132 22094.994 - 22219.825: 54.8665% ( 2) 00:22:15.132 22219.825 - 22344.655: 54.8991% ( 2) 00:22:15.132 22344.655 - 22469.486: 54.9479% ( 3) 00:22:15.132 22469.486 - 22594.316: 54.9805% ( 2) 00:22:15.132 22594.316 - 22719.147: 55.0130% ( 2) 00:22:15.132 22719.147 - 22843.977: 55.0456% ( 2) 00:22:15.132 22843.977 - 22968.808: 55.0781% ( 2) 00:22:15.132 22968.808 - 23093.638: 55.1107% ( 2) 00:22:15.132 23093.638 - 23218.469: 55.1595% ( 3) 00:22:15.132 23218.469 - 23343.299: 55.1921% ( 2) 00:22:15.132 23343.299 - 23468.130: 55.2083% ( 1) 00:22:15.132 23468.130 - 23592.960: 55.2409% ( 2) 00:22:15.132 23592.960 - 23717.790: 55.2734% ( 2) 00:22:15.132 23717.790 - 23842.621: 55.3223% ( 3) 00:22:15.132 23842.621 - 23967.451: 55.3711% ( 3) 00:22:15.132 23967.451 - 24092.282: 55.3874% ( 1) 00:22:15.132 24092.282 - 24217.112: 55.4525% ( 4) 00:22:15.132 24217.112 - 24341.943: 55.5013% ( 3) 00:22:15.132 24341.943 - 24466.773: 55.5501% ( 3) 00:22:15.132 24466.773 - 24591.604: 55.6152% ( 4) 00:22:15.132 24591.604 - 24716.434: 55.6641% ( 3) 00:22:15.132 24716.434 - 24841.265: 55.7292% ( 4) 00:22:15.132 24841.265 - 24966.095: 55.8268% ( 6) 00:22:15.132 24966.095 - 25090.926: 55.9408% ( 7) 00:22:15.132 25090.926 - 25215.756: 56.0384% ( 6) 00:22:15.132 25215.756 - 25340.587: 56.1198% ( 5) 00:22:15.132 25340.587 - 25465.417: 56.2012% ( 5) 00:22:15.132 25465.417 - 25590.248: 56.2500% ( 3) 00:22:15.132 27712.366 - 27837.196: 56.2663% ( 1) 00:22:15.132 27962.027 - 28086.857: 56.2826% ( 1) 00:22:15.132 28086.857 - 28211.688: 56.3151% ( 2) 00:22:15.132 28211.688 - 28336.518: 56.4128% ( 6) 00:22:15.132 28336.518 - 28461.349: 56.5592% ( 9) 00:22:15.133 28461.349 - 28586.179: 56.6569% ( 6) 00:22:15.133 28586.179 - 28711.010: 56.8685% ( 13) 00:22:15.133 28711.010 - 28835.840: 57.2103% ( 21) 00:22:15.133 28835.840 - 28960.670: 57.7799% ( 35) 00:22:15.133 28960.670 - 29085.501: 58.3171% ( 33) 00:22:15.133 29085.501 - 29210.331: 59.1797% ( 53) 00:22:15.133 29210.331 - 29335.162: 60.5306% ( 83) 00:22:15.133 29335.162 - 29459.992: 62.9069% ( 146) 00:22:15.133 29459.992 - 29584.823: 65.1042% ( 135) 00:22:15.133 29584.823 - 29709.653: 67.3991% ( 141) 00:22:15.133 29709.653 - 29834.484: 69.9382% ( 156) 00:22:15.133 29834.484 - 29959.314: 72.6237% ( 165) 00:22:15.133 29959.314 - 30084.145: 73.8932% ( 78) 00:22:15.133 30084.145 - 30208.975: 75.1628% ( 78) 00:22:15.133 30208.975 - 30333.806: 78.1413% ( 183) 00:22:15.133 30333.806 - 30458.636: 81.7871% ( 224) 00:22:15.133 30458.636 - 30583.467: 85.2051% ( 210) 00:22:15.133 30583.467 - 30708.297: 86.1003% ( 55) 00:22:15.133 30708.297 - 30833.128: 86.8815% ( 48) 00:22:15.133 30833.128 - 30957.958: 87.8092% ( 57) 00:22:15.133 30957.958 - 31082.789: 88.9486% ( 70) 00:22:15.133 31082.789 - 31207.619: 89.8763% ( 57) 00:22:15.133 31207.619 - 31332.450: 91.2109% ( 82) 00:22:15.133 31332.450 - 31457.280: 91.9434% ( 45) 00:22:15.133 31457.280 - 31582.110: 92.6758% ( 45) 00:22:15.133 31582.110 - 31706.941: 93.9616% ( 79) 00:22:15.133 31706.941 - 31831.771: 94.7591% ( 49) 00:22:15.133 
31831.771 - 31956.602: 95.3451% ( 36) 00:22:15.133 31956.602 - 32206.263: 96.1589% ( 50) 00:22:15.133 32206.263 - 32455.924: 96.6146% ( 28) 00:22:15.133 32455.924 - 32705.585: 96.9889% ( 23) 00:22:15.133 32705.585 - 32955.246: 97.3470% ( 22) 00:22:15.133 32955.246 - 33204.907: 97.5911% ( 15) 00:22:15.133 33204.907 - 33454.568: 97.8027% ( 13) 00:22:15.133 33454.568 - 33704.229: 97.9167% ( 7) 00:22:15.133 37948.465 - 38198.126: 97.9329% ( 1) 00:22:15.133 38198.126 - 38447.787: 97.9980% ( 4) 00:22:15.133 38447.787 - 38697.448: 98.0632% ( 4) 00:22:15.133 38697.448 - 38947.109: 98.1120% ( 3) 00:22:15.133 38947.109 - 39196.770: 98.1771% ( 4) 00:22:15.133 39196.770 - 39446.430: 98.2422% ( 4) 00:22:15.133 39446.430 - 39696.091: 98.3073% ( 4) 00:22:15.133 39696.091 - 39945.752: 98.3887% ( 5) 00:22:15.133 39945.752 - 40195.413: 98.4863% ( 6) 00:22:15.133 40195.413 - 40445.074: 98.5677% ( 5) 00:22:15.133 40445.074 - 40694.735: 98.6491% ( 5) 00:22:15.133 40694.735 - 40944.396: 98.7305% ( 5) 00:22:15.133 40944.396 - 41194.057: 98.8118% ( 5) 00:22:15.133 41194.057 - 41443.718: 98.8932% ( 5) 00:22:15.133 41443.718 - 41693.379: 98.9583% ( 4) 00:22:15.133 47185.920 - 47435.581: 99.0397% ( 5) 00:22:15.133 47435.581 - 47685.242: 99.1211% ( 5) 00:22:15.133 47685.242 - 47934.903: 99.2188% ( 6) 00:22:15.133 47934.903 - 48184.564: 99.3164% ( 6) 00:22:15.133 48184.564 - 48434.225: 99.3978% ( 5) 00:22:15.133 48434.225 - 48683.886: 99.4954% ( 6) 00:22:15.133 48683.886 - 48933.547: 99.5605% ( 4) 00:22:15.133 48933.547 - 49183.208: 99.6419% ( 5) 00:22:15.133 49183.208 - 49432.869: 99.7396% ( 6) 00:22:15.133 49432.869 - 49682.530: 99.8372% ( 6) 00:22:15.133 49682.530 - 49932.190: 99.9186% ( 5) 00:22:15.133 49932.190 - 50181.851: 100.0000% ( 5) 00:22:15.133 00:22:15.133 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:22:15.133 ============================================================================== 00:22:15.133 Range in us Cumulative IO count 00:22:15.133 9112.625 - 9175.040: 0.0483% ( 3) 00:22:15.133 9175.040 - 9237.455: 0.1289% ( 5) 00:22:15.133 9237.455 - 9299.870: 0.1772% ( 3) 00:22:15.133 9299.870 - 9362.286: 0.2094% ( 2) 00:22:15.133 9362.286 - 9424.701: 0.3061% ( 6) 00:22:15.133 9424.701 - 9487.116: 0.3222% ( 1) 00:22:15.133 9487.116 - 9549.531: 0.4349% ( 7) 00:22:15.133 9549.531 - 9611.947: 0.6604% ( 14) 00:22:15.133 9611.947 - 9674.362: 0.9665% ( 19) 00:22:15.133 9674.362 - 9736.777: 1.5142% ( 34) 00:22:15.133 9736.777 - 9799.192: 2.1424% ( 39) 00:22:15.133 9799.192 - 9861.608: 2.8189% ( 42) 00:22:15.133 9861.608 - 9924.023: 3.6566% ( 52) 00:22:15.133 9924.023 - 9986.438: 4.9613% ( 81) 00:22:15.133 9986.438 - 10048.853: 6.2178% ( 78) 00:22:15.133 10048.853 - 10111.269: 7.1521% ( 58) 00:22:15.133 10111.269 - 10173.684: 8.5052% ( 84) 00:22:15.133 10173.684 - 10236.099: 9.7455% ( 77) 00:22:15.133 10236.099 - 10298.514: 10.7120% ( 60) 00:22:15.133 10298.514 - 10360.930: 12.6128% ( 118) 00:22:15.133 10360.930 - 10423.345: 13.5631% ( 59) 00:22:15.133 10423.345 - 10485.760: 14.8679% ( 81) 00:22:15.133 10485.760 - 10548.175: 16.2854% ( 88) 00:22:15.133 10548.175 - 10610.590: 17.6707% ( 86) 00:22:15.133 10610.590 - 10673.006: 18.4439% ( 48) 00:22:15.133 10673.006 - 10735.421: 19.1205% ( 42) 00:22:15.133 10735.421 - 10797.836: 19.5393% ( 26) 00:22:15.133 10797.836 - 10860.251: 19.8776% ( 21) 00:22:15.133 10860.251 - 10922.667: 20.2159% ( 21) 00:22:15.133 10922.667 - 10985.082: 20.5380% ( 20) 00:22:15.133 10985.082 - 11047.497: 20.7957% ( 16) 00:22:15.133 11047.497 - 11109.912: 21.0052% ( 13) 
00:22:15.133 11109.912 - 11172.328: 21.3918% ( 24) 00:22:15.133 11172.328 - 11234.743: 21.6495% ( 16) 00:22:15.133 11234.743 - 11297.158: 21.8750% ( 14) 00:22:15.133 11297.158 - 11359.573: 22.6321% ( 47) 00:22:15.133 11359.573 - 11421.989: 22.8737% ( 15) 00:22:15.133 11421.989 - 11484.404: 23.3409% ( 29) 00:22:15.133 11484.404 - 11546.819: 23.4697% ( 8) 00:22:15.133 11546.819 - 11609.234: 23.6630% ( 12) 00:22:15.133 11609.234 - 11671.650: 23.8563% ( 12) 00:22:15.133 11671.650 - 11734.065: 23.9852% ( 8) 00:22:15.133 11734.065 - 11796.480: 24.1785% ( 12) 00:22:15.133 11796.480 - 11858.895: 24.5651% ( 24) 00:22:15.133 11858.895 - 11921.310: 24.8872% ( 20) 00:22:15.133 11921.310 - 11983.726: 25.3544% ( 29) 00:22:15.133 11983.726 - 12046.141: 26.0309% ( 42) 00:22:15.133 12046.141 - 12108.556: 26.6591% ( 39) 00:22:15.133 12108.556 - 12170.971: 27.4323% ( 48) 00:22:15.133 12170.971 - 12233.387: 28.4149% ( 61) 00:22:15.133 12233.387 - 12295.802: 29.0110% ( 37) 00:22:15.133 12295.802 - 12358.217: 29.6875% ( 42) 00:22:15.133 12358.217 - 12420.632: 30.1707% ( 30) 00:22:15.133 12420.632 - 12483.048: 30.8312% ( 41) 00:22:15.133 12483.048 - 12545.463: 31.3144% ( 30) 00:22:15.133 12545.463 - 12607.878: 31.8460% ( 33) 00:22:15.133 12607.878 - 12670.293: 32.6997% ( 53) 00:22:15.133 12670.293 - 12732.709: 32.9897% ( 18) 00:22:15.133 12732.709 - 12795.124: 33.3280% ( 21) 00:22:15.133 12795.124 - 12857.539: 33.7468% ( 26) 00:22:15.133 12857.539 - 12919.954: 33.9884% ( 15) 00:22:15.133 12919.954 - 12982.370: 34.1012% ( 7) 00:22:15.133 12982.370 - 13044.785: 34.2139% ( 7) 00:22:15.133 13044.785 - 13107.200: 34.3750% ( 10) 00:22:15.133 13107.200 - 13169.615: 34.5844% ( 13) 00:22:15.133 13169.615 - 13232.030: 34.8421% ( 16) 00:22:15.133 13232.030 - 13294.446: 35.0193% ( 11) 00:22:15.133 13294.446 - 13356.861: 35.3576% ( 21) 00:22:15.133 13356.861 - 13419.276: 35.9053% ( 34) 00:22:15.133 13419.276 - 13481.691: 36.5174% ( 38) 00:22:15.133 13481.691 - 13544.107: 37.0006% ( 30) 00:22:15.133 13544.107 - 13606.522: 37.8705% ( 54) 00:22:15.133 13606.522 - 13668.937: 38.3376% ( 29) 00:22:15.133 13668.937 - 13731.352: 39.0947% ( 47) 00:22:15.133 13731.352 - 13793.768: 39.7713% ( 42) 00:22:15.133 13793.768 - 13856.183: 40.3351% ( 35) 00:22:15.133 13856.183 - 13918.598: 40.8827% ( 34) 00:22:15.133 13918.598 - 13981.013: 41.5915% ( 44) 00:22:15.133 13981.013 - 14043.429: 42.0586% ( 29) 00:22:15.133 14043.429 - 14105.844: 42.6869% ( 39) 00:22:15.133 14105.844 - 14168.259: 43.3795% ( 43) 00:22:15.133 14168.259 - 14230.674: 43.9594% ( 36) 00:22:15.133 14230.674 - 14293.090: 44.5232% ( 35) 00:22:15.133 14293.090 - 14355.505: 45.1997% ( 42) 00:22:15.133 14355.505 - 14417.920: 45.6830% ( 30) 00:22:15.133 14417.920 - 14480.335: 46.1340% ( 28) 00:22:15.133 14480.335 - 14542.750: 46.6012% ( 29) 00:22:15.133 14542.750 - 14605.166: 47.0200% ( 26) 00:22:15.133 14605.166 - 14667.581: 47.3905% ( 23) 00:22:15.133 14667.581 - 14729.996: 47.7610% ( 23) 00:22:15.133 14729.996 - 14792.411: 48.1798% ( 26) 00:22:15.133 14792.411 - 14854.827: 48.5825% ( 25) 00:22:15.133 14854.827 - 14917.242: 48.8885% ( 19) 00:22:15.133 14917.242 - 14979.657: 49.1785% ( 18) 00:22:15.133 14979.657 - 15042.072: 49.3718% ( 12) 00:22:15.133 15042.072 - 15104.488: 49.5490% ( 11) 00:22:15.133 15104.488 - 15166.903: 49.7745% ( 14) 00:22:15.133 15166.903 - 15229.318: 50.0483% ( 17) 00:22:15.133 15229.318 - 15291.733: 50.3222% ( 17) 00:22:15.133 15291.733 - 15354.149: 50.5960% ( 17) 00:22:15.133 15354.149 - 15416.564: 50.7571% ( 10) 00:22:15.133 15416.564 - 15478.979: 
50.8860% ( 8) 00:22:15.133 15478.979 - 15541.394: 51.0631% ( 11) 00:22:15.133 15541.394 - 15603.810: 51.2403% ( 11) 00:22:15.133 15603.810 - 15666.225: 51.4981% ( 16) 00:22:15.133 15666.225 - 15728.640: 51.6753% ( 11) 00:22:15.133 15728.640 - 15791.055: 51.8847% ( 13) 00:22:15.133 15791.055 - 15853.470: 52.0780% ( 12) 00:22:15.133 15853.470 - 15915.886: 52.3035% ( 14) 00:22:15.133 15915.886 - 15978.301: 52.4807% ( 11) 00:22:15.133 15978.301 - 16103.131: 52.7223% ( 15) 00:22:15.133 16103.131 - 16227.962: 52.8995% ( 11) 00:22:15.133 16227.962 - 16352.792: 53.0445% ( 9) 00:22:15.133 16352.792 - 16477.623: 53.2700% ( 14) 00:22:15.133 16477.623 - 16602.453: 53.4794% ( 13) 00:22:15.133 16602.453 - 16727.284: 53.7049% ( 14) 00:22:15.133 16727.284 - 16852.114: 53.8177% ( 7) 00:22:15.133 16852.114 - 16976.945: 53.8660% ( 3) 00:22:15.133 16976.945 - 17101.775: 53.9143% ( 3) 00:22:15.133 17101.775 - 17226.606: 53.9626% ( 3) 00:22:15.133 17226.606 - 17351.436: 54.0110% ( 3) 00:22:15.133 17351.436 - 17476.267: 54.0915% ( 5) 00:22:15.133 17476.267 - 17601.097: 54.2204% ( 8) 00:22:15.133 17601.097 - 17725.928: 54.3170% ( 6) 00:22:15.133 17725.928 - 17850.758: 54.4298% ( 7) 00:22:15.133 17850.758 - 17975.589: 54.5264% ( 6) 00:22:15.133 17975.589 - 18100.419: 54.6070% ( 5) 00:22:15.133 18100.419 - 18225.250: 54.6392% ( 2) 00:22:15.133 19223.893 - 19348.724: 54.6714% ( 2) 00:22:15.133 19348.724 - 19473.554: 54.7519% ( 5) 00:22:15.133 19473.554 - 19598.385: 54.9291% ( 11) 00:22:15.133 19598.385 - 19723.215: 55.0741% ( 9) 00:22:15.133 19723.215 - 19848.046: 55.1063% ( 2) 00:22:15.133 19848.046 - 19972.876: 55.1546% ( 3) 00:22:15.133 19972.876 - 20097.707: 55.1869% ( 2) 00:22:15.133 20097.707 - 20222.537: 55.2191% ( 2) 00:22:15.133 20222.537 - 20347.368: 55.2674% ( 3) 00:22:15.134 20347.368 - 20472.198: 55.2996% ( 2) 00:22:15.134 20472.198 - 20597.029: 55.3479% ( 3) 00:22:15.134 20597.029 - 20721.859: 55.3963% ( 3) 00:22:15.134 20721.859 - 20846.690: 55.4285% ( 2) 00:22:15.134 20846.690 - 20971.520: 55.4607% ( 2) 00:22:15.134 20971.520 - 21096.350: 55.4929% ( 2) 00:22:15.134 21096.350 - 21221.181: 55.5412% ( 3) 00:22:15.134 21221.181 - 21346.011: 55.5896% ( 3) 00:22:15.134 21346.011 - 21470.842: 55.6218% ( 2) 00:22:15.134 21470.842 - 21595.672: 55.6701% ( 3) 00:22:15.134 25215.756 - 25340.587: 55.7184% ( 3) 00:22:15.134 25340.587 - 25465.417: 55.7668% ( 3) 00:22:15.134 25465.417 - 25590.248: 55.8151% ( 3) 00:22:15.134 25590.248 - 25715.078: 55.8634% ( 3) 00:22:15.134 25715.078 - 25839.909: 55.9117% ( 3) 00:22:15.134 25839.909 - 25964.739: 55.9601% ( 3) 00:22:15.134 25964.739 - 26089.570: 56.0084% ( 3) 00:22:15.134 26089.570 - 26214.400: 56.1050% ( 6) 00:22:15.134 26214.400 - 26339.230: 56.2339% ( 8) 00:22:15.134 26339.230 - 26464.061: 56.3466% ( 7) 00:22:15.134 26464.061 - 26588.891: 56.4111% ( 4) 00:22:15.134 26588.891 - 26713.722: 56.5077% ( 6) 00:22:15.134 26713.722 - 26838.552: 56.5722% ( 4) 00:22:15.134 26838.552 - 26963.383: 56.6205% ( 3) 00:22:15.134 26963.383 - 27088.213: 56.6527% ( 2) 00:22:15.134 27088.213 - 27213.044: 56.7171% ( 4) 00:22:15.134 27213.044 - 27337.874: 56.7655% ( 3) 00:22:15.134 27337.874 - 27462.705: 56.8138% ( 3) 00:22:15.134 27462.705 - 27587.535: 56.9104% ( 6) 00:22:15.134 27587.535 - 27712.366: 56.9427% ( 2) 00:22:15.134 27712.366 - 27837.196: 57.0071% ( 4) 00:22:15.134 27837.196 - 27962.027: 57.1037% ( 6) 00:22:15.134 27962.027 - 28086.857: 57.2487% ( 9) 00:22:15.134 28086.857 - 28211.688: 57.5064% ( 16) 00:22:15.134 28211.688 - 28336.518: 57.7642% ( 16) 00:22:15.134 28336.518 - 
28461.349: 57.9897% ( 14) 00:22:15.134 28461.349 - 28586.179: 58.2957% ( 19) 00:22:15.134 28586.179 - 28711.010: 58.6501% ( 22) 00:22:15.134 28711.010 - 28835.840: 59.1656% ( 32) 00:22:15.134 28835.840 - 28960.670: 59.7616% ( 37) 00:22:15.134 28960.670 - 29085.501: 60.4381% ( 42) 00:22:15.134 29085.501 - 29210.331: 61.1308% ( 43) 00:22:15.134 29210.331 - 29335.162: 62.1134% ( 61) 00:22:15.134 29335.162 - 29459.992: 63.5631% ( 90) 00:22:15.134 29459.992 - 29584.823: 65.9794% ( 150) 00:22:15.134 29584.823 - 29709.653: 68.0412% ( 128) 00:22:15.134 29709.653 - 29834.484: 70.3769% ( 145) 00:22:15.134 29834.484 - 29959.314: 72.5999% ( 138) 00:22:15.134 29959.314 - 30084.145: 74.5006% ( 118) 00:22:15.134 30084.145 - 30208.975: 76.0954% ( 99) 00:22:15.134 30208.975 - 30333.806: 78.5438% ( 152) 00:22:15.134 30333.806 - 30458.636: 82.2165% ( 228) 00:22:15.134 30458.636 - 30583.467: 86.0180% ( 236) 00:22:15.134 30583.467 - 30708.297: 87.3550% ( 83) 00:22:15.134 30708.297 - 30833.128: 88.3054% ( 59) 00:22:15.134 30833.128 - 30957.958: 89.4652% ( 72) 00:22:15.134 30957.958 - 31082.789: 90.4317% ( 60) 00:22:15.134 31082.789 - 31207.619: 91.3015% ( 54) 00:22:15.134 31207.619 - 31332.450: 92.2841% ( 61) 00:22:15.134 31332.450 - 31457.280: 93.5889% ( 81) 00:22:15.134 31457.280 - 31582.110: 94.4910% ( 56) 00:22:15.134 31582.110 - 31706.941: 95.3447% ( 53) 00:22:15.134 31706.941 - 31831.771: 96.5206% ( 73) 00:22:15.134 31831.771 - 31956.602: 96.9716% ( 28) 00:22:15.134 31956.602 - 32206.263: 97.6160% ( 40) 00:22:15.134 32206.263 - 32455.924: 97.9543% ( 21) 00:22:15.134 32455.924 - 32705.585: 98.2925% ( 21) 00:22:15.134 32705.585 - 32955.246: 98.5503% ( 16) 00:22:15.134 32955.246 - 33204.907: 98.7113% ( 10) 00:22:15.134 33204.907 - 33454.568: 98.8724% ( 10) 00:22:15.134 33454.568 - 33704.229: 98.9691% ( 6) 00:22:15.134 36450.499 - 36700.160: 99.0174% ( 3) 00:22:15.134 36700.160 - 36949.821: 99.0979% ( 5) 00:22:15.134 36949.821 - 37199.482: 99.1785% ( 5) 00:22:15.134 37199.482 - 37449.143: 99.2429% ( 4) 00:22:15.134 37449.143 - 37698.804: 99.3396% ( 6) 00:22:15.134 37698.804 - 37948.465: 99.4201% ( 5) 00:22:15.134 37948.465 - 38198.126: 99.5006% ( 5) 00:22:15.134 38198.126 - 38447.787: 99.5812% ( 5) 00:22:15.134 38447.787 - 38697.448: 99.6778% ( 6) 00:22:15.134 38697.448 - 38947.109: 99.7584% ( 5) 00:22:15.134 38947.109 - 39196.770: 99.8228% ( 4) 00:22:15.134 39196.770 - 39446.430: 99.8872% ( 4) 00:22:15.134 39446.430 - 39696.091: 99.9517% ( 4) 00:22:15.134 39696.091 - 39945.752: 100.0000% ( 3) 00:22:15.134 00:22:15.134 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:22:15.134 ============================================================================== 00:22:15.134 Range in us Cumulative IO count 00:22:15.134 9424.701 - 9487.116: 0.0322% ( 2) 00:22:15.134 9487.116 - 9549.531: 0.1450% ( 7) 00:22:15.134 9549.531 - 9611.947: 0.2738% ( 8) 00:22:15.134 9611.947 - 9674.362: 0.4994% ( 14) 00:22:15.134 9674.362 - 9736.777: 0.9021% ( 25) 00:22:15.134 9736.777 - 9799.192: 1.8524% ( 59) 00:22:15.134 9799.192 - 9861.608: 2.4807% ( 39) 00:22:15.134 9861.608 - 9924.023: 3.4311% ( 59) 00:22:15.134 9924.023 - 9986.438: 4.8164% ( 86) 00:22:15.134 9986.438 - 10048.853: 6.6527% ( 114) 00:22:15.134 10048.853 - 10111.269: 8.2474% ( 99) 00:22:15.134 10111.269 - 10173.684: 10.2126% ( 122) 00:22:15.134 10173.684 - 10236.099: 11.5657% ( 84) 00:22:15.134 10236.099 - 10298.514: 12.8544% ( 80) 00:22:15.134 10298.514 - 10360.930: 13.9014% ( 65) 00:22:15.134 10360.930 - 10423.345: 15.4639% ( 97) 00:22:15.134 10423.345 - 
10485.760: 16.5271% ( 66) 00:22:15.134 10485.760 - 10548.175: 17.0909% ( 35) 00:22:15.134 10548.175 - 10610.590: 17.6224% ( 33) 00:22:15.134 10610.590 - 10673.006: 18.0412% ( 26) 00:22:15.134 10673.006 - 10735.421: 18.3634% ( 20) 00:22:15.134 10735.421 - 10797.836: 18.7017% ( 21) 00:22:15.134 10797.836 - 10860.251: 19.3460% ( 40) 00:22:15.134 10860.251 - 10922.667: 19.6037% ( 16) 00:22:15.134 10922.667 - 10985.082: 19.8454% ( 15) 00:22:15.134 10985.082 - 11047.497: 20.3447% ( 31) 00:22:15.134 11047.497 - 11109.912: 20.7635% ( 26) 00:22:15.134 11109.912 - 11172.328: 20.9246% ( 10) 00:22:15.134 11172.328 - 11234.743: 21.3595% ( 27) 00:22:15.134 11234.743 - 11297.158: 22.3905% ( 64) 00:22:15.134 11297.158 - 11359.573: 22.6160% ( 14) 00:22:15.134 11359.573 - 11421.989: 22.8254% ( 13) 00:22:15.134 11421.989 - 11484.404: 23.0348% ( 13) 00:22:15.134 11484.404 - 11546.819: 23.7113% ( 42) 00:22:15.134 11546.819 - 11609.234: 24.0818% ( 23) 00:22:15.134 11609.234 - 11671.650: 24.2268% ( 9) 00:22:15.134 11671.650 - 11734.065: 24.3718% ( 9) 00:22:15.134 11734.065 - 11796.480: 24.6939% ( 20) 00:22:15.134 11796.480 - 11858.895: 25.0483% ( 22) 00:22:15.134 11858.895 - 11921.310: 25.7732% ( 45) 00:22:15.134 11921.310 - 11983.726: 26.1598% ( 24) 00:22:15.134 11983.726 - 12046.141: 26.9491% ( 49) 00:22:15.134 12046.141 - 12108.556: 27.5129% ( 35) 00:22:15.134 12108.556 - 12170.971: 27.8028% ( 18) 00:22:15.134 12170.971 - 12233.387: 28.0122% ( 13) 00:22:15.134 12233.387 - 12295.802: 29.2687% ( 78) 00:22:15.134 12295.802 - 12358.217: 29.5909% ( 20) 00:22:15.134 12358.217 - 12420.632: 29.8969% ( 19) 00:22:15.134 12420.632 - 12483.048: 30.1707% ( 17) 00:22:15.134 12483.048 - 12545.463: 30.6218% ( 28) 00:22:15.134 12545.463 - 12607.878: 31.5883% ( 60) 00:22:15.134 12607.878 - 12670.293: 31.8943% ( 19) 00:22:15.134 12670.293 - 12732.709: 32.1360% ( 15) 00:22:15.134 12732.709 - 12795.124: 32.3293% ( 12) 00:22:15.134 12795.124 - 12857.539: 32.9897% ( 41) 00:22:15.134 12857.539 - 12919.954: 33.2474% ( 16) 00:22:15.134 12919.954 - 12982.370: 33.4729% ( 14) 00:22:15.134 12982.370 - 13044.785: 33.6179% ( 9) 00:22:15.134 13044.785 - 13107.200: 33.8434% ( 14) 00:22:15.134 13107.200 - 13169.615: 34.0367% ( 12) 00:22:15.134 13169.615 - 13232.030: 34.2461% ( 13) 00:22:15.134 13232.030 - 13294.446: 34.6488% ( 25) 00:22:15.134 13294.446 - 13356.861: 35.1482% ( 31) 00:22:15.134 13356.861 - 13419.276: 35.6314% ( 30) 00:22:15.134 13419.276 - 13481.691: 36.0503% ( 26) 00:22:15.134 13481.691 - 13544.107: 36.5174% ( 29) 00:22:15.134 13544.107 - 13606.522: 37.0006% ( 30) 00:22:15.134 13606.522 - 13668.937: 37.9027% ( 56) 00:22:15.134 13668.937 - 13731.352: 38.4826% ( 36) 00:22:15.134 13731.352 - 13793.768: 39.0625% ( 36) 00:22:15.134 13793.768 - 13856.183: 39.6424% ( 36) 00:22:15.134 13856.183 - 13918.598: 40.2223% ( 36) 00:22:15.134 13918.598 - 13981.013: 40.8183% ( 37) 00:22:15.134 13981.013 - 14043.429: 41.5593% ( 46) 00:22:15.134 14043.429 - 14105.844: 42.4452% ( 55) 00:22:15.134 14105.844 - 14168.259: 43.0735% ( 39) 00:22:15.134 14168.259 - 14230.674: 43.7178% ( 40) 00:22:15.134 14230.674 - 14293.090: 44.1688% ( 28) 00:22:15.134 14293.090 - 14355.505: 44.7326% ( 35) 00:22:15.134 14355.505 - 14417.920: 45.2159% ( 30) 00:22:15.134 14417.920 - 14480.335: 45.6830% ( 29) 00:22:15.134 14480.335 - 14542.750: 46.2468% ( 35) 00:22:15.134 14542.750 - 14605.166: 46.7622% ( 32) 00:22:15.134 14605.166 - 14667.581: 47.1649% ( 25) 00:22:15.134 14667.581 - 14729.996: 47.5515% ( 24) 00:22:15.134 14729.996 - 14792.411: 47.9220% ( 23) 00:22:15.134 
14792.411 - 14854.827: 48.2603% ( 21) 00:22:15.134 14854.827 - 14917.242: 48.5986% ( 21) 00:22:15.134 14917.242 - 14979.657: 49.0979% ( 31) 00:22:15.134 14979.657 - 15042.072: 49.3718% ( 17) 00:22:15.134 15042.072 - 15104.488: 49.5490% ( 11) 00:22:15.134 15104.488 - 15166.903: 49.6939% ( 9) 00:22:15.134 15166.903 - 15229.318: 49.9034% ( 13) 00:22:15.134 15229.318 - 15291.733: 50.1611% ( 16) 00:22:15.134 15291.733 - 15354.149: 50.4994% ( 21) 00:22:15.134 15354.149 - 15416.564: 50.7088% ( 13) 00:22:15.134 15416.564 - 15478.979: 50.8537% ( 9) 00:22:15.134 15478.979 - 15541.394: 51.0954% ( 15) 00:22:15.134 15541.394 - 15603.810: 51.3209% ( 14) 00:22:15.134 15603.810 - 15666.225: 51.5142% ( 12) 00:22:15.134 15666.225 - 15728.640: 51.8041% ( 18) 00:22:15.134 15728.640 - 15791.055: 51.9974% ( 12) 00:22:15.134 15791.055 - 15853.470: 52.1746% ( 11) 00:22:15.134 15853.470 - 15915.886: 52.2874% ( 7) 00:22:15.134 15915.886 - 15978.301: 52.3840% ( 6) 00:22:15.134 15978.301 - 16103.131: 52.6418% ( 16) 00:22:15.134 16103.131 - 16227.962: 52.8995% ( 16) 00:22:15.134 16227.962 - 16352.792: 53.1250% ( 14) 00:22:15.134 16352.792 - 16477.623: 53.3022% ( 11) 00:22:15.134 16477.623 - 16602.453: 53.4794% ( 11) 00:22:15.134 16602.453 - 16727.284: 53.5760% ( 6) 00:22:15.134 16727.284 - 16852.114: 53.6082% ( 2) 00:22:15.134 17351.436 - 17476.267: 53.6405% ( 2) 00:22:15.134 17476.267 - 17601.097: 53.7049% ( 4) 00:22:15.135 17601.097 - 17725.928: 53.7854% ( 5) 00:22:15.135 17725.928 - 17850.758: 54.0593% ( 17) 00:22:15.135 17850.758 - 17975.589: 54.1237% ( 4) 00:22:15.135 17975.589 - 18100.419: 54.2043% ( 5) 00:22:15.135 18100.419 - 18225.250: 54.3170% ( 7) 00:22:15.135 18225.250 - 18350.080: 54.3976% ( 5) 00:22:15.135 18350.080 - 18474.910: 54.4781% ( 5) 00:22:15.135 18474.910 - 18599.741: 54.5747% ( 6) 00:22:15.135 18599.741 - 18724.571: 54.6714% ( 6) 00:22:15.135 18724.571 - 18849.402: 54.7519% ( 5) 00:22:15.135 18849.402 - 18974.232: 54.8808% ( 8) 00:22:15.135 18974.232 - 19099.063: 55.0097% ( 8) 00:22:15.135 19099.063 - 19223.893: 55.1546% ( 9) 00:22:15.135 19223.893 - 19348.724: 55.2674% ( 7) 00:22:15.135 19348.724 - 19473.554: 55.4285% ( 10) 00:22:15.135 19473.554 - 19598.385: 55.5735% ( 9) 00:22:15.135 19598.385 - 19723.215: 55.6379% ( 4) 00:22:15.135 19723.215 - 19848.046: 55.6701% ( 2) 00:22:15.135 23592.960 - 23717.790: 55.7345% ( 4) 00:22:15.135 23717.790 - 23842.621: 55.7829% ( 3) 00:22:15.135 23842.621 - 23967.451: 55.8151% ( 2) 00:22:15.135 23967.451 - 24092.282: 55.8634% ( 3) 00:22:15.135 24092.282 - 24217.112: 55.9117% ( 3) 00:22:15.135 24217.112 - 24341.943: 55.9439% ( 2) 00:22:15.135 24341.943 - 24466.773: 55.9923% ( 3) 00:22:15.135 24466.773 - 24591.604: 56.0245% ( 2) 00:22:15.135 24591.604 - 24716.434: 56.0728% ( 3) 00:22:15.135 24716.434 - 24841.265: 56.1050% ( 2) 00:22:15.135 24841.265 - 24966.095: 56.1534% ( 3) 00:22:15.135 24966.095 - 25090.926: 56.1856% ( 2) 00:22:15.135 25090.926 - 25215.756: 56.2339% ( 3) 00:22:15.135 25215.756 - 25340.587: 56.2661% ( 2) 00:22:15.135 25340.587 - 25465.417: 56.3144% ( 3) 00:22:15.135 25465.417 - 25590.248: 56.3628% ( 3) 00:22:15.135 25590.248 - 25715.078: 56.3950% ( 2) 00:22:15.135 25715.078 - 25839.909: 56.4433% ( 3) 00:22:15.135 25839.909 - 25964.739: 56.4916% ( 3) 00:22:15.135 25964.739 - 26089.570: 56.5238% ( 2) 00:22:15.135 26089.570 - 26214.400: 56.5722% ( 3) 00:22:15.135 26214.400 - 26339.230: 56.6205% ( 3) 00:22:15.135 26339.230 - 26464.061: 56.6688% ( 3) 00:22:15.135 26464.061 - 26588.891: 56.7171% ( 3) 00:22:15.135 26588.891 - 26713.722: 56.7494% 
( 2) 00:22:15.135 26713.722 - 26838.552: 56.7977% ( 3) 00:22:15.135 26838.552 - 26963.383: 56.8460% ( 3) 00:22:15.135 26963.383 - 27088.213: 56.8782% ( 2) 00:22:15.135 27088.213 - 27213.044: 56.9588% ( 5) 00:22:15.135 27213.044 - 27337.874: 57.0071% ( 3) 00:22:15.135 27337.874 - 27462.705: 57.0393% ( 2) 00:22:15.135 27462.705 - 27587.535: 57.1360% ( 6) 00:22:15.135 27587.535 - 27712.366: 57.2648% ( 8) 00:22:15.135 27712.366 - 27837.196: 57.3937% ( 8) 00:22:15.135 27837.196 - 27962.027: 57.4903% ( 6) 00:22:15.135 27962.027 - 28086.857: 57.6675% ( 11) 00:22:15.135 28086.857 - 28211.688: 57.8608% ( 12) 00:22:15.135 28211.688 - 28336.518: 58.1186% ( 16) 00:22:15.135 28336.518 - 28461.349: 58.5213% ( 25) 00:22:15.135 28461.349 - 28586.179: 58.9723% ( 28) 00:22:15.135 28586.179 - 28711.010: 59.5039% ( 33) 00:22:15.135 28711.010 - 28835.840: 60.0032% ( 31) 00:22:15.135 28835.840 - 28960.670: 60.5992% ( 37) 00:22:15.135 28960.670 - 29085.501: 61.1952% ( 37) 00:22:15.135 29085.501 - 29210.331: 62.0651% ( 54) 00:22:15.135 29210.331 - 29335.162: 63.1282% ( 66) 00:22:15.135 29335.162 - 29459.992: 64.4974% ( 85) 00:22:15.135 29459.992 - 29584.823: 67.1392% ( 164) 00:22:15.135 29584.823 - 29709.653: 69.9742% ( 176) 00:22:15.135 29709.653 - 29834.484: 71.3273% ( 84) 00:22:15.135 29834.484 - 29959.314: 73.0670% ( 108) 00:22:15.135 29959.314 - 30084.145: 75.0483% ( 123) 00:22:15.135 30084.145 - 30208.975: 77.0941% ( 127) 00:22:15.135 30208.975 - 30333.806: 79.0593% ( 122) 00:22:15.135 30333.806 - 30458.636: 83.6823% ( 287) 00:22:15.135 30458.636 - 30583.467: 86.8879% ( 199) 00:22:15.135 30583.467 - 30708.297: 87.9349% ( 65) 00:22:15.135 30708.297 - 30833.128: 88.8209% ( 55) 00:22:15.135 30833.128 - 30957.958: 89.6424% ( 51) 00:22:15.135 30957.958 - 31082.789: 90.5445% ( 56) 00:22:15.135 31082.789 - 31207.619: 91.5271% ( 61) 00:22:15.135 31207.619 - 31332.450: 92.6224% ( 68) 00:22:15.135 31332.450 - 31457.280: 93.7339% ( 69) 00:22:15.135 31457.280 - 31582.110: 94.2977% ( 35) 00:22:15.135 31582.110 - 31706.941: 94.8454% ( 34) 00:22:15.135 31706.941 - 31831.771: 95.9085% ( 66) 00:22:15.135 31831.771 - 31956.602: 96.6334% ( 45) 00:22:15.135 31956.602 - 32206.263: 97.3099% ( 42) 00:22:15.135 32206.263 - 32455.924: 97.6965% ( 24) 00:22:15.135 32455.924 - 32705.585: 98.0670% ( 23) 00:22:15.135 32705.585 - 32955.246: 98.3892% ( 20) 00:22:15.135 32955.246 - 33204.907: 98.6630% ( 17) 00:22:15.135 33204.907 - 33454.568: 98.9207% ( 16) 00:22:15.135 33454.568 - 33704.229: 99.1624% ( 15) 00:22:15.135 33704.229 - 33953.890: 99.2912% ( 8) 00:22:15.135 33953.890 - 34203.550: 99.3879% ( 6) 00:22:15.135 34203.550 - 34453.211: 99.4684% ( 5) 00:22:15.135 34453.211 - 34702.872: 99.5651% ( 6) 00:22:15.135 34702.872 - 34952.533: 99.6456% ( 5) 00:22:15.135 34952.533 - 35202.194: 99.7262% ( 5) 00:22:15.135 35202.194 - 35451.855: 99.8228% ( 6) 00:22:15.135 35451.855 - 35701.516: 99.9034% ( 5) 00:22:15.135 35701.516 - 35951.177: 100.0000% ( 6) 00:22:15.135 00:22:15.135 18:49:43 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:22:15.135 00:22:15.135 real 0m2.884s 00:22:15.135 user 0m2.337s 00:22:15.135 sys 0m0.432s 00:22:15.135 ************************************ 00:22:15.135 END TEST nvme_perf 00:22:15.135 ************************************ 00:22:15.135 18:49:43 nvme.nvme_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:15.135 18:49:43 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:22:15.394 18:49:43 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world 
/home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:22:15.394 18:49:43 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:22:15.394 18:49:43 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:22:15.394 18:49:43 nvme -- common/autotest_common.sh@10 -- # set +x
00:22:15.394 ************************************
00:22:15.394 START TEST nvme_hello_world
00:22:15.394 ************************************
00:22:15.394 18:49:43 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:22:15.652 Initializing NVMe Controllers
00:22:15.652 Attached to 0000:00:10.0
00:22:15.652 Namespace ID: 1 size: 6GB
00:22:15.652 Attached to 0000:00:11.0
00:22:15.652 Namespace ID: 1 size: 5GB
00:22:15.652 Attached to 0000:00:13.0
00:22:15.652 Namespace ID: 1 size: 1GB
00:22:15.652 Attached to 0000:00:12.0
00:22:15.652 Namespace ID: 1 size: 4GB
00:22:15.652 Namespace ID: 2 size: 4GB
00:22:15.652 Namespace ID: 3 size: 4GB
00:22:15.652 Initialization complete.
00:22:15.652 INFO: using host memory buffer for IO
00:22:15.652 Hello world!
00:22:15.652 INFO: using host memory buffer for IO
00:22:15.652 Hello world!
00:22:15.652 INFO: using host memory buffer for IO
00:22:15.652 Hello world!
00:22:15.652 INFO: using host memory buffer for IO
00:22:15.652 Hello world!
00:22:15.652 INFO: using host memory buffer for IO
00:22:15.652 Hello world!
00:22:15.652 INFO: using host memory buffer for IO
00:22:15.652 Hello world!
00:22:15.652
00:22:15.652 real 0m0.372s
00:22:15.652 user 0m0.120s
00:22:15.652 sys 0m0.201s
00:22:15.652 18:49:44 nvme.nvme_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable
00:22:15.652 ************************************
00:22:15.652 END TEST nvme_hello_world
00:22:15.652 ************************************
00:22:15.652 18:49:44 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
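The hello_world pass above attaches to each of the four emulated PCIe controllers and performs one write/read round trip per namespace. The run can be repeated by hand; a minimal sketch, assuming the same checkout path as this job and that the devices are bound to a userspace driver by scripts/setup.sh (HUGEMEM, in MB, sizes the hugepage pool):

  sudo HUGEMEM=2048 /home/vagrant/spdk_repo/spdk/scripts/setup.sh   # reserve hugepages and bind the NVMe devices
  sudo /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 # same flags as the harness invocation above

The -i 0 flag selects shared memory ID 0, matching the invocation recorded in this log.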
00:22:15.652 18:49:44 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:22:15.652 18:49:44 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:22:15.652 18:49:44 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:22:15.652 18:49:44 nvme -- common/autotest_common.sh@10 -- # set +x
00:22:15.652 ************************************
00:22:15.652 START TEST nvme_sgl
00:22:15.652 ************************************
00:22:15.652 18:49:44 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:22:16.219 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:22:16.219 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:22:16.219 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:22:16.219 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:22:16.219 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:22:16.219 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:22:16.219 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:22:16.219 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:22:16.219 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:22:16.219 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:22:16.219 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:22:16.219 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:22:16.219 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:22:16.219 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:22:16.219 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:22:16.219 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:22:16.219 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:22:16.219 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:22:16.219 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:22:16.219 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:22:16.219 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:22:16.219 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:22:16.219 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:22:16.219 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:22:16.219 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:22:16.219 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:22:16.219 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:22:16.219 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:22:16.219 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:22:16.219 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:22:16.219 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:22:16.219 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:22:16.219 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:22:16.219 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:22:16.219 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:22:16.219 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:22:16.219 NVMe Readv/Writev Request test
00:22:16.219 Attached to 0000:00:10.0
00:22:16.219 Attached to 0000:00:11.0
00:22:16.219 Attached to 0000:00:13.0
00:22:16.219 Attached to 0000:00:12.0
00:22:16.219 0000:00:10.0: build_io_request_2 test passed
00:22:16.219 0000:00:10.0: build_io_request_4 test passed
00:22:16.219 0000:00:10.0: build_io_request_5 test passed
00:22:16.219 0000:00:10.0: build_io_request_6 test passed
00:22:16.219 0000:00:10.0: build_io_request_7 test passed
00:22:16.219 0000:00:10.0: build_io_request_10 test passed
00:22:16.219 0000:00:11.0: build_io_request_2 test passed
00:22:16.219 0000:00:11.0: build_io_request_4 test passed
00:22:16.219 0000:00:11.0: build_io_request_5 test passed
00:22:16.219 0000:00:11.0: build_io_request_6 test passed
00:22:16.219 0000:00:11.0: build_io_request_7 test passed
00:22:16.219 0000:00:11.0: build_io_request_10 test passed
00:22:16.219 Cleaning up...
00:22:16.219
00:22:16.219 real 0m0.488s
00:22:16.219 user 0m0.228s
00:22:16.219 sys 0m0.203s
00:22:16.219 ************************************
00:22:16.219 END TEST nvme_sgl
00:22:16.219 ************************************
00:22:16.219 18:49:44 nvme.nvme_sgl -- common/autotest_common.sh@1126 -- # xtrace_disable
00:22:16.219 18:49:44 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
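The split in the SGL output above is the expected pattern for this run: on 0000:00:10.0 and 0000:00:11.0, requests 0, 1, 3, 8, 9 and 11 are rejected up front with "Invalid IO length parameter", while 2, 4, 5, 6, 7 and 10 complete and are reported as passed; 0000:00:13.0 and 0000:00:12.0 reject every request shape here, and the test still finishes cleanly, so these rejections are non-fatal. A quick way to tally the two outcomes from a saved copy of this output (sgl.log is a hypothetical file name):

  grep -c 'Invalid IO length parameter' sgl.log  # requests rejected at build time
  grep -c 'test passed' sgl.log                  # requests that completed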
00:22:16.219 18:49:44 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:22:16.219 18:49:44 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:22:16.219 18:49:44 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:22:16.219 18:49:44 nvme -- common/autotest_common.sh@10 -- # set +x
00:22:16.219 ************************************
00:22:16.219 START TEST nvme_e2edp
00:22:16.219 ************************************
00:22:16.219 18:49:44 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:22:16.785 NVMe Write/Read with End-to-End data protection test
00:22:16.785 Attached to 0000:00:10.0
00:22:16.785 Attached to 0000:00:11.0
00:22:16.785 Attached to 0000:00:13.0
00:22:16.785 Attached to 0000:00:12.0
00:22:16.785 Cleaning up...
00:22:16.785
00:22:16.785 real 0m0.384s
00:22:16.785 user 0m0.131s
00:22:16.785 sys 0m0.206s
00:22:16.785 ************************************
00:22:16.785 END TEST nvme_e2edp
00:22:16.785 ************************************
00:22:16.785 18:49:45 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable
00:22:16.785 18:49:45 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:22:16.785 18:49:45 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:22:16.785 18:49:45 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:22:16.785 18:49:45 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:22:16.785 18:49:45 nvme -- common/autotest_common.sh@10 -- # set +x
00:22:16.785 ************************************
00:22:16.785 START TEST nvme_reserve
00:22:16.785 ************************************
00:22:16.785 18:49:45 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:22:17.044 =====================================================
00:22:17.044 NVMe Controller at PCI bus 0, device 16, function 0
00:22:17.044 =====================================================
00:22:17.044 Reservations: Not Supported
00:22:17.044 =====================================================
00:22:17.044 NVMe Controller at PCI bus 0, device 17, function 0
00:22:17.044 =====================================================
00:22:17.044 Reservations: Not Supported
00:22:17.044 =====================================================
00:22:17.044 NVMe Controller at PCI bus 0, device 19, function 0
00:22:17.044 =====================================================
00:22:17.044 Reservations: Not Supported
00:22:17.044 =====================================================
00:22:17.044 NVMe Controller at PCI bus 0, device 18, function 0
00:22:17.044 =====================================================
00:22:17.044 Reservations: Not Supported
00:22:17.044 Reservation test passed
00:22:17.044
00:22:17.044 real 0m0.371s
00:22:17.044 user 0m0.141s
00:22:17.044 sys 0m0.188s
00:22:17.044 ************************************
00:22:17.044 END TEST nvme_reserve
00:22:17.044 ************************************
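Every controller in this rig reports reservations as unsupported, so the nvme_reserve pass above only verifies that the capability query path behaves. One way to re-check a single controller outside the harness is the stock identify example; a sketch, assuming the binary landed at build/bin/spdk_nvme_identify in this SPDK version (older trees place it at build/examples/identify), with the target address taken from this run:

  sudo /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0'  # dump identify data for one controller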
00:22:17.044 18:49:45 nvme.nvme_reserve -- common/autotest_common.sh@1126 -- # xtrace_disable
00:22:17.044 18:49:45 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:22:17.044 18:49:45 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:22:17.044 18:49:45 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:22:17.044 18:49:45 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:22:17.044 18:49:45 nvme -- common/autotest_common.sh@10 -- # set +x
00:22:17.044 ************************************
00:22:17.044 START TEST nvme_err_injection
00:22:17.044 ************************************
00:22:17.044 18:49:45 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:22:17.612 NVMe Error Injection test
00:22:17.612 Attached to 0000:00:10.0
00:22:17.612 Attached to 0000:00:11.0
00:22:17.612 Attached to 0000:00:13.0
00:22:17.612 Attached to 0000:00:12.0
00:22:17.612 0000:00:13.0: get features failed as expected
00:22:17.612 0000:00:12.0: get features failed as expected
00:22:17.612 0000:00:10.0: get features failed as expected
00:22:17.612 0000:00:11.0: get features failed as expected
00:22:17.612 0000:00:10.0: get features successfully as expected
00:22:17.612 0000:00:11.0: get features successfully as expected
00:22:17.612 0000:00:13.0: get features successfully as expected
00:22:17.612 0000:00:12.0: get features successfully as expected
00:22:17.612 0000:00:11.0: read failed as expected
00:22:17.612 0000:00:13.0: read failed as expected
00:22:17.612 0000:00:10.0: read failed as expected
00:22:17.612 0000:00:12.0: read failed as expected
00:22:17.612 0000:00:10.0: read successfully as expected
00:22:17.612 0000:00:11.0: read successfully as expected
00:22:17.612 0000:00:13.0: read successfully as expected
00:22:17.612 0000:00:12.0: read successfully as expected
00:22:17.612 Cleaning up...
00:22:17.612
00:22:17.612 real 0m0.392s
00:22:17.612 user 0m0.139s
00:22:17.612 sys 0m0.206s
00:22:17.612 ************************************
00:22:17.612 END TEST nvme_err_injection
00:22:17.612 ************************************
00:22:17.612 18:49:46 nvme.nvme_err_injection -- common/autotest_common.sh@1126 -- # xtrace_disable
00:22:17.612 18:49:46 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:22:17.612 18:49:46 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:22:17.612 18:49:46 nvme -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']'
00:22:17.612 18:49:46 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:22:17.612 18:49:46 nvme -- common/autotest_common.sh@10 -- # set +x
00:22:17.612 ************************************
00:22:17.612 START TEST nvme_overhead
00:22:17.612 ************************************
00:22:17.612 18:49:46 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:22:18.990 Initializing NVMe Controllers
00:22:18.990 Attached to 0000:00:10.0
00:22:18.990 Attached to 0000:00:11.0
00:22:18.990 Attached to 0000:00:13.0
00:22:18.990 Attached to 0000:00:12.0
00:22:18.990 Initialization complete. Launching workers.
00:22:18.990 submit (in ns) avg, min, max = 15916.8, 12573.3, 70592.4 00:22:18.990 complete (in ns) avg, min, max = 10663.4, 7944.8, 50161.9 00:22:18.990 00:22:18.990 Submit histogram 00:22:18.990 ================ 00:22:18.990 Range in us Cumulative Count 00:22:18.990 12.556 - 12.617: 0.0326% ( 3) 00:22:18.990 12.617 - 12.678: 0.0652% ( 3) 00:22:18.990 12.678 - 12.739: 0.1196% ( 5) 00:22:18.990 12.739 - 12.800: 0.1848% ( 6) 00:22:18.990 12.800 - 12.861: 0.2718% ( 8) 00:22:18.990 12.861 - 12.922: 0.3153% ( 4) 00:22:18.990 12.922 - 12.983: 0.4240% ( 10) 00:22:18.990 12.983 - 13.044: 0.4566% ( 3) 00:22:18.990 13.044 - 13.105: 0.5001% ( 4) 00:22:18.990 13.105 - 13.166: 0.5436% ( 4) 00:22:18.990 13.166 - 13.227: 0.6088% ( 6) 00:22:18.990 13.227 - 13.288: 0.7175% ( 10) 00:22:18.990 13.288 - 13.349: 0.8589% ( 13) 00:22:18.990 13.349 - 13.410: 1.1089% ( 23) 00:22:18.990 13.410 - 13.470: 1.7178% ( 56) 00:22:18.990 13.470 - 13.531: 2.6201% ( 83) 00:22:18.990 13.531 - 13.592: 3.5660% ( 87) 00:22:18.990 13.592 - 13.653: 4.8271% ( 116) 00:22:18.990 13.653 - 13.714: 6.2840% ( 134) 00:22:18.990 13.714 - 13.775: 7.8278% ( 142) 00:22:18.990 13.775 - 13.836: 9.2629% ( 132) 00:22:18.990 13.836 - 13.897: 10.7306% ( 135) 00:22:18.990 13.897 - 13.958: 12.0678% ( 123) 00:22:18.990 13.958 - 14.019: 13.4268% ( 125) 00:22:18.990 14.019 - 14.080: 14.6227% ( 110) 00:22:18.990 14.080 - 14.141: 16.5797% ( 180) 00:22:18.990 14.141 - 14.202: 19.2977% ( 250) 00:22:18.990 14.202 - 14.263: 22.0700% ( 255) 00:22:18.990 14.263 - 14.324: 24.7989% ( 251) 00:22:18.990 14.324 - 14.385: 28.1148% ( 305) 00:22:18.990 14.385 - 14.446: 31.2459% ( 288) 00:22:18.990 14.446 - 14.507: 35.4316% ( 385) 00:22:18.990 14.507 - 14.568: 40.3240% ( 450) 00:22:18.990 14.568 - 14.629: 45.2707% ( 455) 00:22:18.990 14.629 - 14.690: 49.5760% ( 396) 00:22:18.990 14.690 - 14.750: 53.3377% ( 346) 00:22:18.990 14.750 - 14.811: 56.7406% ( 313) 00:22:18.990 14.811 - 14.872: 59.6978% ( 272) 00:22:18.990 14.872 - 14.933: 62.5679% ( 264) 00:22:18.990 14.933 - 14.994: 64.9924% ( 223) 00:22:18.990 14.994 - 15.055: 67.3407% ( 216) 00:22:18.990 15.055 - 15.116: 69.3085% ( 181) 00:22:18.990 15.116 - 15.177: 70.9828% ( 154) 00:22:18.990 15.177 - 15.238: 72.4940% ( 139) 00:22:18.990 15.238 - 15.299: 73.5921% ( 101) 00:22:18.990 15.299 - 15.360: 74.7989% ( 111) 00:22:18.990 15.360 - 15.421: 75.6686% ( 80) 00:22:18.990 15.421 - 15.482: 76.1905% ( 48) 00:22:18.990 15.482 - 15.543: 76.7123% ( 48) 00:22:18.990 15.543 - 15.604: 77.0059% ( 27) 00:22:18.990 15.604 - 15.726: 77.7343% ( 67) 00:22:18.990 15.726 - 15.848: 78.2344% ( 46) 00:22:18.990 15.848 - 15.970: 78.5388% ( 28) 00:22:18.990 15.970 - 16.091: 78.7345% ( 18) 00:22:18.990 16.091 - 16.213: 78.8867% ( 14) 00:22:18.990 16.213 - 16.335: 78.9411% ( 5) 00:22:18.990 16.335 - 16.457: 79.0063% ( 6) 00:22:18.990 16.457 - 16.579: 79.0172% ( 1) 00:22:18.990 16.579 - 16.701: 79.0607% ( 4) 00:22:18.990 16.701 - 16.823: 79.1042% ( 4) 00:22:18.990 16.823 - 16.945: 79.1368% ( 3) 00:22:18.990 16.945 - 17.067: 79.1476% ( 1) 00:22:18.990 17.067 - 17.189: 79.2020% ( 5) 00:22:18.990 17.189 - 17.310: 79.2455% ( 4) 00:22:18.990 17.310 - 17.432: 79.2672% ( 2) 00:22:18.990 17.432 - 17.554: 79.3542% ( 8) 00:22:18.990 17.554 - 17.676: 79.3977% ( 4) 00:22:18.990 17.676 - 17.798: 79.4629% ( 6) 00:22:18.990 17.798 - 17.920: 79.5716% ( 10) 00:22:18.990 17.920 - 18.042: 79.6043% ( 3) 00:22:18.990 18.042 - 18.164: 79.6804% ( 7) 00:22:18.990 18.164 - 18.286: 79.7782% ( 9) 00:22:18.990 18.286 - 18.408: 79.8652% ( 8) 00:22:18.990 18.408 - 18.530: 
00:22:18.990 [per-bucket latency distribution truncated: rows of the form "<low> - <high>: <cumulative %> ( <count> )" continue from 18.530 us through 70.705 us, where the distribution reaches 100.0000%]
00:22:18.991 
00:22:18.991 Complete histogram
00:22:18.991 ==================
00:22:18.991        Range in us     Cumulative    Count 
00:22:18.991 [per-bucket rows truncated; e.g. "9.509 - 9.570: 34.4423% ( 288)" and "9.570 - 9.630: 39.6173% ( 476)"; the complete histogram runs from 7.924 us through 50.225 us, reaching 100.0000% ( 1)]
00:22:18.992 
00:22:18.992 real    0m1.363s
00:22:18.992 user    0m1.121s
00:22:18.992 sys     0m0.188s
00:22:18.992 ************************************
00:22:18.992 END TEST nvme_overhead
00:22:18.992 18:49:47 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable
00:22:18.992 18:49:47 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:22:18.992 ************************************
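In these histograms the middle column is a running (cumulative) percentage while the parenthesized count is per-bucket, so the total number of sampled completions can be recovered from any adjacent pair of rows. A quick sanity check with the two example rows kept above (any awk will do):

  awk 'BEGIN { d = (39.6173 - 34.4423) / 100; printf "%.0f\n", 476 / d }'   # ~9198 samples overall
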
00:22:18.992 18:49:47 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:22:18.992 18:49:47 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']'
00:22:18.992 18:49:47 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:22:18.992 18:49:47 nvme -- common/autotest_common.sh@10 -- # set +x
00:22:18.992 ************************************
00:22:18.992 START TEST nvme_arbitration
00:22:18.992 ************************************
00:22:18.992 18:49:47 nvme.nvme_arbitration -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:22:23.204 Initializing NVMe Controllers
00:22:23.204 Attached to 0000:00:10.0
00:22:23.204 Attached to 0000:00:11.0
00:22:23.204 Attached to 0000:00:13.0
00:22:23.204 Attached to 0000:00:12.0
00:22:23.204 Associating QEMU NVMe Ctrl (12340 ) with lcore 0
00:22:23.204 Associating QEMU NVMe Ctrl (12341 ) with lcore 1
00:22:23.204 Associating QEMU NVMe Ctrl (12343 ) with lcore 2
00:22:23.204 Associating QEMU NVMe Ctrl (12342 ) with lcore 3
00:22:23.204 Associating QEMU NVMe Ctrl (12342 ) with lcore 0
00:22:23.204 Associating QEMU NVMe Ctrl (12342 ) with lcore 1
00:22:23.204 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:22:23.204 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:22:23.204 Initialization complete. Launching workers.
00:22:23.204 Starting thread on core 1 with urgent priority queue
00:22:23.204 Starting thread on core 2 with urgent priority queue
00:22:23.204 Starting thread on core 0 with urgent priority queue
00:22:23.204 Starting thread on core 3 with urgent priority queue
00:22:23.204 QEMU NVMe Ctrl (12340 ) core 0:  469.33 IO/s  213.07 secs/100000 ios
00:22:23.204 QEMU NVMe Ctrl (12342 ) core 0:  469.33 IO/s  213.07 secs/100000 ios
00:22:23.204 QEMU NVMe Ctrl (12341 ) core 1:  448.00 IO/s  223.21 secs/100000 ios
00:22:23.204 QEMU NVMe Ctrl (12342 ) core 1:  448.00 IO/s  223.21 secs/100000 ios
00:22:23.204 QEMU NVMe Ctrl (12343 ) core 2:  512.00 IO/s  195.31 secs/100000 ios
00:22:23.204 QEMU NVMe Ctrl (12342 ) core 3:  469.33 IO/s  213.07 secs/100000 ios
00:22:23.204 ========================================================
00:22:23.204 
00:22:23.204 
00:22:23.204 real    0m3.517s
00:22:23.204 user    0m9.458s
00:22:23.204 sys     0m0.220s
00:22:23.204 18:49:51 nvme.nvme_arbitration -- common/autotest_common.sh@1126 -- # xtrace_disable
00:22:23.204 18:49:51 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x
00:22:23.204 ************************************
00:22:23.204 END TEST nvme_arbitration
00:22:23.204 ************************************
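In the arbitration table above, the secs/100000 ios column is simply the inverse of the measured IO rate. Checking the core 2 row:

  awk 'BEGIN { printf "%.2f\n", 100000 / 512.00 }'   # 195.31, matching "512.00 IO/s  195.31 secs/100000 ios"
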
00:22:23.204 18:49:51 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:22:23.204 18:49:51 nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:22:23.204 18:49:51 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:22:23.204 18:49:51 nvme -- common/autotest_common.sh@10 -- # set +x
00:22:23.204 ************************************
00:22:23.204 START TEST nvme_single_aen
00:22:23.204 ************************************
00:22:23.204 18:49:51 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:22:23.204 Asynchronous Event Request test
00:22:23.204 Attached to 0000:00:10.0
00:22:23.204 Attached to 0000:00:11.0
00:22:23.204 Attached to 0000:00:13.0
00:22:23.204 Attached to 0000:00:12.0
00:22:23.204 Reset controller to setup AER completions for this process
00:22:23.204 Registering asynchronous event callbacks...
00:22:23.204 Getting orig temperature thresholds of all controllers
00:22:23.204 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:22:23.204 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:22:23.204 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:22:23.204 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:22:23.204 Setting all controllers temperature threshold low to trigger AER
00:22:23.204 Waiting for all controllers temperature threshold to be set lower
00:22:23.204 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:22:23.204 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:22:23.204 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:22:23.204 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0
00:22:23.204 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:22:23.204 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0
00:22:23.204 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:22:23.204 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0
00:22:23.204 Waiting for all controllers to trigger AER and reset threshold
00:22:23.204 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:22:23.204 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:22:23.204 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:22:23.204 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:22:23.204 Cleaning up...
00:22:23.204 
00:22:23.204 real    0m0.394s
00:22:23.204 user    0m0.135s
00:22:23.204 sys     0m0.209s
00:22:23.204 18:49:51 nvme.nvme_single_aen -- common/autotest_common.sh@1126 -- # xtrace_disable
00:22:23.204 ************************************
00:22:23.204 END TEST nvme_single_aen
00:22:23.204 ************************************
00:22:23.204 18:49:51 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x
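The trigger here is just a threshold write: each controller reports a composite temperature of 323 Kelvin, so forcing the threshold below that makes the drive raise a temperature AEN. A sketch of the same trick outside SPDK, assuming a kernel-owned /dev/nvme0 character device and nvme-cli (feature ID 4 is the NVMe temperature threshold; flags are an assumption about your nvme-cli version):

  nvme set-feature /dev/nvme0 -f 4 -v 0x141   # 321 K, below the reported 323 K, so an AEN fires
  nvme get-feature /dev/nvme0 -f 4            # confirm the lowered threshold
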
00:22:23.204 18:49:51 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers
00:22:23.204 18:49:51 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:22:23.204 18:49:51 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:22:23.204 18:49:51 nvme -- common/autotest_common.sh@10 -- # set +x
00:22:23.204 ************************************
00:22:23.204 START TEST nvme_doorbell_aers
00:22:23.204 ************************************
00:22:23.204 18:49:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # nvme_doorbell_aers
00:22:23.204 18:49:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=()
00:22:23.204 18:49:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf
00:22:23.204 18:49:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs))
00:22:23.204 18:49:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs
00:22:23.204 18:49:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=()
00:22:23.204 18:49:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs
00:22:23.204 18:49:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:22:23.204 18:49:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:22:23.204 18:49:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr'
00:22:23.204 18:49:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 ))
00:22:23.204 18:49:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:22:23.204 18:49:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:22:23.204 18:49:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0'
00:22:23.462 [2024-10-08 18:49:52.126241] nvme_pcie_common.c: 311:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65572) is not found. Dropping the request.
00:22:33.508 Executing: test_write_invalid_db
00:22:33.508 Waiting for AER completion...
00:22:33.508 Failure: test_write_invalid_db
00:22:33.508 
00:22:33.509 Executing: test_invalid_db_write_overflow_sq
00:22:33.509 Waiting for AER completion...
00:22:33.509 Failure: test_invalid_db_write_overflow_sq
00:22:33.509 
00:22:33.509 Executing: test_invalid_db_write_overflow_cq
00:22:33.509 Waiting for AER completion...
00:22:33.509 Failure: test_invalid_db_write_overflow_cq
00:22:33.509 
00:22:33.509 [the iterations for 0000:00:11.0 (18:50:01), 0000:00:12.0 (18:50:11) and 0000:00:13.0 (18:50:21) repeat the same doorbell_aers invocation, each logging one "The owning process (pid 65572) is not found. Dropping the request." error followed by the same three Executing/Waiting/Failure sequences]
00:23:03.716 
00:23:03.716 real    0m40.304s
00:23:03.716 user    0m28.463s
00:23:03.716 sys     0m11.374s
00:23:03.716 18:50:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1126 -- # xtrace_disable
00:23:03.716 ************************************
00:23:03.716 END TEST nvme_doorbell_aers
00:23:03.716 ************************************
00:23:03.716 18:50:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x
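The get_nvme_bdfs helper traced at the top of this test resolves the PCI addresses by asking gen_nvme.sh for a bdev config and pulling out each traddr. Run standalone it reduces to a few lines (sketch; repo path as used in this job, jq required):

  rootdir=/home/vagrant/spdk_repo/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  printf '%s\n' "${bdfs[@]}"   # 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
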
00:23:03.716 18:50:32 nvme -- nvme/nvme.sh@97 -- # uname
00:23:03.716 18:50:32 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']'
00:23:03.716 18:50:32 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0
00:23:03.716 18:50:32 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']'
00:23:03.716 18:50:32 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:23:03.716 18:50:32 nvme -- common/autotest_common.sh@10 -- # set +x
00:23:03.716 ************************************
00:23:03.716 START TEST nvme_multi_aen
00:23:03.716 ************************************
00:23:03.716 18:50:32 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0
00:23:03.716 [2024-10-08 18:50:32.315780] nvme_pcie_common.c: 311:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65572) is not found. Dropping the request.
00:23:03.716 [eleven further "The owning process (pid 65572) is not found. Dropping the request." errors follow between 18:50:32.315 and 18:50:32.322 as the controllers at 0000:00:10.0/11.0/13.0/12.0 are claimed]
00:23:03.716 Child process pid: 66089
00:23:03.975 [Child] Asynchronous Event Request test
00:23:03.975 [Child] Attached to 0000:00:10.0
00:23:03.975 [Child] Attached to 0000:00:11.0
00:23:03.975 [Child] Attached to 0000:00:13.0
00:23:03.975 [Child] Attached to 0000:00:12.0
00:23:03.975 [Child] Registering asynchronous event callbacks...
00:23:03.975 [Child] Getting orig temperature thresholds of all controllers
00:23:03.975 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:23:03.975 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:23:03.975 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:23:03.975 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:23:03.975 [Child] Waiting for all controllers to trigger AER and reset threshold
00:23:03.975 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:23:03.975 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:23:03.975 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:23:03.975 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:23:03.975 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:23:03.975 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:23:03.975 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:23:03.975 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:23:03.975 [Child] Cleaning up...
00:23:03.975 Asynchronous Event Request test
00:23:03.975 Attached to 0000:00:10.0
00:23:03.975 Attached to 0000:00:11.0
00:23:03.975 Attached to 0000:00:13.0
00:23:03.975 Attached to 0000:00:12.0
00:23:03.975 Reset controller to setup AER completions for this process
00:23:03.975 Registering asynchronous event callbacks...
00:23:03.975 Getting orig temperature thresholds of all controllers
00:23:03.975 Setting all controllers temperature threshold low to trigger AER
00:23:03.975 Waiting for all controllers temperature threshold to be set lower
00:23:03.975 [the 343 Kelvin threshold, aer_cb / "Resetting Temp Threshold" and 323 Kelvin current-temperature lines repeat for 0000:00:10.0, 0000:00:11.0, 0000:00:13.0 and 0000:00:12.0, as in nvme_single_aen above]
00:23:04.233 Cleaning up...
00:23:04.233 
00:23:04.233 real    0m0.718s
00:23:04.233 user    0m0.277s
00:23:04.233 sys     0m0.323s
00:23:04.233 18:50:32 nvme.nvme_multi_aen -- common/autotest_common.sh@1126 -- # xtrace_disable
00:23:04.233 18:50:32 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x
00:23:04.233 ************************************
00:23:04.233 END TEST nvme_multi_aen
00:23:04.233 ************************************
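Every START TEST/END TEST pair in this log comes from the run_test helper in test/common/autotest_common.sh, which banners the test name and times the wrapped command (hence the real/user/sys triples). A stripped-down sketch of that wrapper (the real helper also manages xtrace and nested suites):

  run_test() {
          local name=$1; shift
          echo "************************************"
          echo "START TEST $name"
          echo "************************************"
          time "$@"                # the wrapped test binary or shell function
          echo "************************************"
          echo "END TEST $name"
          echo "************************************"
  }
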
00:23:04.233 18:50:32 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000
00:23:04.233 18:50:32 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:23:04.233 18:50:32 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:23:04.233 18:50:32 nvme -- common/autotest_common.sh@10 -- # set +x
00:23:04.233 ************************************
00:23:04.233 START TEST nvme_startup
00:23:04.233 ************************************
00:23:04.233 18:50:32 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000
00:23:04.492 Initializing NVMe Controllers
00:23:04.492 Attached to 0000:00:10.0
00:23:04.492 Attached to 0000:00:11.0
00:23:04.492 Attached to 0000:00:13.0
00:23:04.492 Attached to 0000:00:12.0
00:23:04.492 Initialization complete.
00:23:04.492 Time used:204287.828 (us).
00:23:04.492 
00:23:04.492 real    0m0.305s
00:23:04.492 user    0m0.102s
00:23:04.492 sys     0m0.153s
00:23:04.492 18:50:33 nvme.nvme_startup -- common/autotest_common.sh@1126 -- # xtrace_disable
00:23:04.492 ************************************
00:23:04.492 END TEST nvme_startup
00:23:04.492 ************************************
00:23:04.492 18:50:33 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x
00:23:04.492 18:50:33 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary
00:23:04.492 18:50:33 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:23:04.492 18:50:33 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:23:04.492 18:50:33 nvme -- common/autotest_common.sh@10 -- # set +x
00:23:04.492 ************************************
00:23:04.492 START TEST nvme_multi_secondary
00:23:04.492 ************************************
00:23:04.492 18:50:33 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # nvme_multi_secondary
00:23:04.492 18:50:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=66145
00:23:04.492 18:50:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1
00:23:04.492 18:50:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=66146
00:23:04.492 18:50:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:23:04.492 18:50:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4
00:23:08.681 Initializing NVMe Controllers
00:23:08.681 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:23:08.681 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:23:08.681 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:23:08.681 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:23:08.681 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2
00:23:08.681 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2
00:23:08.681 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2
00:23:08.681 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2
00:23:08.681 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2
00:23:08.681 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2
00:23:08.681 Initialization complete. Launching workers.
00:23:08.681 ========================================================
00:23:08.681 Latency(us)
00:23:08.681 Device Information : IOPS MiB/s Average min max
00:23:08.681 PCIE (0000:00:10.0) NSID 1 from core 2: 2267.59 8.86 7054.14 1524.08 19031.37
00:23:08.681 PCIE (0000:00:11.0) NSID 1 from core 2: 2267.59 8.86 7055.93 1637.14 16336.22
00:23:08.681 PCIE (0000:00:13.0) NSID 1 from core 2: 2267.59 8.86 7055.55 1577.10 15410.73
00:23:08.681 PCIE (0000:00:12.0) NSID 1 from core 2: 2267.59 8.86 7055.13 1626.55 14603.23
00:23:08.681 PCIE (0000:00:12.0) NSID 2 from core 2: 2267.59 8.86 7056.16 1587.01 20054.90
00:23:08.681 PCIE (0000:00:12.0) NSID 3 from core 2: 2267.59 8.86 7055.78 1575.78 19458.23
00:23:08.681 ========================================================
00:23:08.681 Total : 13605.51 53.15 7055.45 1524.08 20054.90
00:23:08.681 
00:23:08.681 Initializing NVMe Controllers
00:23:08.681 [controller attach and namespace association lines repeat as above, this time binding all six namespaces to lcore 1]
00:23:08.681 Initialization complete. Launching workers.
00:23:08.681 ========================================================
00:23:08.681 Latency(us)
00:23:08.681 Device Information : IOPS MiB/s Average min max
00:23:08.681 PCIE (0000:00:10.0) NSID 1 from core 1: 4868.57 19.02 3284.42 1235.93 7161.50
00:23:08.681 PCIE (0000:00:11.0) NSID 1 from core 1: 4868.57 19.02 3285.85 1215.26 7277.78
00:23:08.681 PCIE (0000:00:13.0) NSID 1 from core 1: 4868.57 19.02 3285.70 1233.59 7119.89
00:23:08.681 PCIE (0000:00:12.0) NSID 1 from core 1: 4868.57 19.02 3285.56 1232.97 7438.44
00:23:08.681 PCIE (0000:00:12.0) NSID 2 from core 1: 4868.57 19.02 3285.56 1243.70 8105.01
00:23:08.681 PCIE (0000:00:12.0) NSID 3 from core 1: 4868.57 19.02 3285.47 1235.49 7902.89
00:23:08.681 ========================================================
00:23:08.681 Total : 29211.39 114.11 3285.43 1215.26 8105.01
00:23:08.681 
00:23:08.681 18:50:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 66145
00:23:10.057 Initializing NVMe Controllers
00:23:10.057 [controller attach and namespace association lines repeat as above, binding all six namespaces to lcore 0]
00:23:10.057 Initialization complete. Launching workers.
00:23:10.058 ========================================================
00:23:10.058 Latency(us)
00:23:10.058 Device Information : IOPS MiB/s Average min max
00:23:10.058 PCIE (0000:00:10.0) NSID 1 from core 0: 7256.13 28.34 2203.38 1003.52 12063.43
00:23:10.058 PCIE (0000:00:11.0) NSID 1 from core 0: 7256.13 28.34 2204.46 1030.25 11776.25
00:23:10.058 PCIE (0000:00:13.0) NSID 1 from core 0: 7256.13 28.34 2204.39 1029.04 11761.39
00:23:10.058 PCIE (0000:00:12.0) NSID 1 from core 0: 7256.13 28.34 2204.31 1012.50 11667.25
00:23:10.058 PCIE (0000:00:12.0) NSID 2 from core 0: 7256.13 28.34 2204.24 1047.09 11162.14
00:23:10.058 PCIE (0000:00:12.0) NSID 3 from core 0: 7256.13 28.34 2204.18 1047.79 11622.58
00:23:10.058 ========================================================
00:23:10.058 Total : 43536.77 170.07 2204.16 1003.52 12063.43
00:23:10.058 
00:23:10.058 18:50:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 66146
00:23:10.058 18:50:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=66214
00:23:10.058 18:50:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=66215
00:23:10.058 18:50:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4
00:23:10.058 18:50:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1
00:23:10.058 18:50:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:23:13.380 Initializing NVMe Controllers
00:23:13.380 [controller attach and namespace association lines repeat as above, binding all six namespaces to lcore 0]
00:23:13.380 Initialization complete. Launching workers.
00:23:13.380 ========================================================
00:23:13.380 Latency(us)
00:23:13.380 Device Information : IOPS MiB/s Average min max
00:23:13.380 PCIE (0000:00:10.0) NSID 1 from core 0: 4613.65 18.02 3465.59 1251.79 13056.20
00:23:13.380 PCIE (0000:00:11.0) NSID 1 from core 0: 4613.65 18.02 3467.03 1225.14 11912.18
00:23:13.381 PCIE (0000:00:13.0) NSID 1 from core 0: 4613.65 18.02 3466.76 1336.00 11705.06
00:23:13.381 PCIE (0000:00:12.0) NSID 1 from core 0: 4613.65 18.02 3466.46 1432.80 12214.40
00:23:13.381 PCIE (0000:00:12.0) NSID 2 from core 0: 4613.65 18.02 3466.22 1391.15 12061.93
00:23:13.381 PCIE (0000:00:12.0) NSID 3 from core 0: 4613.65 18.02 3466.02 1296.00 12612.04
00:23:13.381 ========================================================
00:23:13.381 Total : 27681.91 108.13 3466.35 1225.14 13056.20
00:23:13.381 
00:23:13.640 Initializing NVMe Controllers
00:23:13.640 [controller attach and namespace association lines repeat as above, binding all six namespaces to lcore 1]
00:23:13.640 Initialization complete. Launching workers.
00:23:13.640 ========================================================
00:23:13.640 Latency(us)
00:23:13.640 Device Information : IOPS MiB/s Average min max
00:23:13.640 PCIE (0000:00:10.0) NSID 1 from core 1: 5044.88 19.71 3169.57 1073.58 9094.93
00:23:13.640 PCIE (0000:00:11.0) NSID 1 from core 1: 5044.88 19.71 3170.98 1098.92 9338.32
00:23:13.640 PCIE (0000:00:13.0) NSID 1 from core 1: 5044.88 19.71 3170.80 1085.74 9383.09
00:23:13.640 PCIE (0000:00:12.0) NSID 1 from core 1: 5044.88 19.71 3170.68 1103.13 9881.39
00:23:13.640 PCIE (0000:00:12.0) NSID 2 from core 1: 5044.88 19.71 3170.68 1085.93 10205.90
00:23:13.640 PCIE (0000:00:12.0) NSID 3 from core 1: 5044.88 19.71 3170.53 1094.99 10116.38
00:23:13.640 ========================================================
00:23:13.640 Total : 30269.28 118.24 3170.54 1073.58 10205.90
00:23:13.640 
00:23:15.543 Initializing NVMe Controllers
00:23:15.543 [controller attach and namespace association lines repeat as above, binding all six namespaces to lcore 2]
00:23:15.543 Initialization complete. Launching workers.
00:23:15.543 ========================================================
00:23:15.543 Latency(us)
00:23:15.543 Device Information : IOPS MiB/s Average min max
00:23:15.543 PCIE (0000:00:10.0) NSID 1 from core 2: 3190.36 12.46 5013.41 1072.99 17423.44
00:23:15.543 PCIE (0000:00:11.0) NSID 1 from core 2: 3190.36 12.46 5014.72 1090.41 17706.97
00:23:15.543 PCIE (0000:00:13.0) NSID 1 from core 2: 3190.36 12.46 5014.73 1103.93 18178.67
00:23:15.543 PCIE (0000:00:12.0) NSID 1 from core 2: 3190.36 12.46 5014.70 1079.45 18451.84
00:23:15.543 PCIE (0000:00:12.0) NSID 2 from core 2: 3190.36 12.46 5014.69 1074.66 16857.25
00:23:15.543 PCIE (0000:00:12.0) NSID 3 from core 2: 3193.56 12.47 5009.38 1085.86 15077.88
00:23:15.543 ========================================================
00:23:15.543 Total : 19145.34 74.79 5013.60 1072.99 18451.84
00:23:15.543 
00:23:15.543 ************************************
00:23:15.543 END TEST nvme_multi_secondary
00:23:15.543 ************************************
00:23:15.543 18:50:44 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 66214
00:23:15.543 18:50:44 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 66215
00:23:15.543 
00:23:15.543 real    0m10.855s
00:23:15.543 user    0m18.691s
00:23:15.543 sys     0m1.249s
00:23:15.543 18:50:44 nvme.nvme_multi_secondary -- common/autotest_common.sh@1126 -- # xtrace_disable
00:23:15.543 18:50:44 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x
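nvme_multi_secondary drives one primary and two secondary spdk_nvme_perf processes, pinned through the -c core masks traced above. Each mask is just one bit per core, which is why the tables report "from core 0/1/2":

  printf '0x%x\n' $((1 << 0))   # 0x1 -> lcore 0, the primary perf process
  printf '0x%x\n' $((1 << 1))   # 0x2 -> lcore 1, first secondary
  printf '0x%x\n' $((1 << 2))   # 0x4 -> lcore 2, second secondary
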
00:23:15.543 18:50:44 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT
00:23:15.543 18:50:44 nvme -- nvme/nvme.sh@102 -- # kill_stub
00:23:15.543 18:50:44 nvme -- common/autotest_common.sh@1089 -- # [[ -e /proc/65143 ]]
00:23:15.543 18:50:44 nvme -- common/autotest_common.sh@1090 -- # kill 65143
00:23:15.543 18:50:44 nvme -- common/autotest_common.sh@1091 -- # wait 65143
00:23:15.543 [2024-10-08 18:50:44.051311] nvme_pcie_common.c: 311:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 66088) is not found. Dropping the request.
00:23:15.543 [fifteen further "The owning process (pid 66088) is not found. Dropping the request." errors follow between 18:50:44.051 and 18:50:44.061 while the stub's four controllers are torn down]
00:23:15.544 [2024-10-08 18:50:44.408373] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited.
00:23:15.802 18:50:44 nvme -- common/autotest_common.sh@1093 -- # rm -f /var/run/spdk_stub0
00:23:15.802 18:50:44 nvme -- common/autotest_common.sh@1097 -- # echo 2
00:23:15.802 18:50:44 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:23:15.802 18:50:44 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:23:15.802 18:50:44 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:23:15.802 18:50:44 nvme -- common/autotest_common.sh@10 -- # set +x
00:23:15.802 ************************************
00:23:15.802 START TEST bdev_nvme_reset_stuck_adm_cmd
00:23:15.802 ************************************
00:23:15.802 18:50:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:23:15.802 * Looking for test storage...
00:23:15.803 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:23:15.803 18:50:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:23:15.803 18:50:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lcov --version
00:23:15.803 18:50:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:23:16.061 18:50:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:23:16.062 18:50:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:23:16.062 [the cmp_versions trace walks scripts/common.sh lines 333-368: both versions are split on ".-:" into ver1=(1 15) and ver2=(2), decimal validates each component, and (( ver1[v] < ver2[v] )) returns 0 on the first component, i.e. 1.15 < 2]
00:23:16.062 18:50:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:23:16.062 18:50:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:23:16.062 	--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:16.062 	--rc genhtml_branch_coverage=1
00:23:16.062 	--rc genhtml_function_coverage=1
00:23:16.062 	--rc genhtml_legend=1
00:23:16.062 	--rc geninfo_all_blocks=1
00:23:16.062 	--rc geninfo_unexecuted_blocks=1
00:23:16.062 	
00:23:16.062 '
00:23:16.062 [the same option block repeats verbatim for the LCOV_OPTS assignment and for the export and assignment of LCOV='lcov ...']
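The version walk above concludes that the installed lcov (1.15) predates 2.x, which selects the older coverage options. The same dotted-version ordering can be checked in one line, since sort -V implements it:

  printf '1.15\n2\n' | sort -V | head -n1   # prints 1.15, i.e. 1.15 sorts before 2
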
00:23:16.062 18:50:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0
00:23:16.062 18:50:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000
00:23:16.062 18:50:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5
00:23:16.062 18:50:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0
00:23:16.062 18:50:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1
00:23:16.062 18:50:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf
00:23:16.062 [get_first_nvme_bdf re-runs the get_nvme_bdfs helper traced in nvme_doorbell_aers above (gen_nvme.sh | jq -r '.config[].params.traddr') and echoes the first address]
00:23:16.062 18:50:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0
00:23:16.062 18:50:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0
00:23:16.062 18:50:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']'
00:23:16.062 18:50:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=66379
00:23:16.062 18:50:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF
00:23:16.062 18:50:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT
00:23:16.062 18:50:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 66379
00:23:16.062 18:50:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # '[' -z 66379 ']'
00:23:16.062 18:50:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:16.062 18:50:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # local max_retries=100
00:23:16.062 18:50:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:16.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:16.062 18:50:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # xtrace_disable
00:23:16.062 18:50:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:23:16.320 [2024-10-08 18:50:44.921536] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization...
00:23:16.320 [2024-10-08 18:50:44.922002] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66379 ]
00:23:16.579 [2024-10-08 18:50:45.153653] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:23:16.838 [2024-10-08 18:50:45.545655] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:23:16.838 [2024-10-08 18:50:45.545749] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2
00:23:16.838 [2024-10-08 18:50:45.545822] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:23:16.838 [2024-10-08 18:50:45.545838] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3
00:23:18.213 18:50:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:23:18.213 18:50:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # return 0
00:23:18.213 18:50:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
00:23:18.213 18:50:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:18.213 18:50:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:23:18.213 nvme0n1
00:23:18.213 18:50:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:18.213 18:50:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt
00:23:18.213 18:50:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_O5VIH.txt
00:23:18.213 18:50:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
00:23:18.213 18:50:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:18.213 18:50:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:23:18.213 true
00:23:18.213 18:50:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:18.213 18:50:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s
00:23:18.213 18:50:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1728413446
00:23:18.213 18:50:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=66413
00:23:18.213 18:50:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
00:23:18.213 18:50:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT
18:50:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:23:20.113 18:50:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:20.113 18:50:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.113 18:50:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:23:20.113 [2024-10-08 18:50:48.846011] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:23:20.113 [2024-10-08 18:50:48.848647] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.113 [2024-10-08 18:50:48.848793] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:23:20.113 [2024-10-08 18:50:48.848916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.113 [2024-10-08 18:50:48.851102] bdev_nvme.c:2221:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:20.113 18:50:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.113 18:50:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 66413 00:23:20.113 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 66413 00:23:20.113 18:50:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 66413 00:23:20.372 18:50:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:23:20.372 18:50:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:23:20.372 18:50:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.372 18:50:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.372 18:50:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:23:20.372 18:50:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.372 18:50:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:23:20.372 18:50:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_O5VIH.txt 00:23:20.372 18:50:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:23:20.372 18:50:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:23:20.372 18:50:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:23:20.372 18:50:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:23:20.372 18:50:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:23:20.372 18:50:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:23:20.372 18:50:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:23:20.372 18:50:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:23:20.372 18:50:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:23:20.372 18:50:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:23:20.372 18:50:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:23:20.372 18:50:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:23:20.372 18:50:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:23:20.372 18:50:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:23:20.372 18:50:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:23:20.372 18:50:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:23:20.372 18:50:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:23:20.372 18:50:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:23:20.372 18:50:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:23:20.372 18:50:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_O5VIH.txt 00:23:20.372 18:50:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 66379 00:23:20.372 18:50:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # '[' -z 66379 ']' 00:23:20.372 18:50:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # kill -0 66379 00:23:20.372 18:50:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # uname 00:23:20.372 18:50:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:20.372 18:50:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66379 00:23:20.372 killing process with pid 66379 00:23:20.372 18:50:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:20.372 18:50:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:20.372 18:50:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66379' 00:23:20.372 18:50:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@969 -- # kill 66379 00:23:20.372 18:50:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@974 -- # wait 66379 00:23:23.656 18:50:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:23:23.656 18:50:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:23:23.656 00:23:23.656 real 0m7.579s 00:23:23.656 user 0m25.632s 00:23:23.656 sys 0m0.874s 00:23:23.656 ************************************ 00:23:23.656 END TEST bdev_nvme_reset_stuck_adm_cmd 
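That closes the stuck-admin-command test: inject a held-back completion for one admin opcode, fire a GET FEATURES through bdev_nvme_send_cmd, reset the controller while that command is still pending, then decode SCT/SC out of the saved completion and check the reset finished within test_timeout. A condensed sketch of the sequence against an already-running spdk_tgt; RPC names, flags, and the command payload are copied from the trace, while the send_cmd output capture and the decoder's bit arithmetic are reconstructions (marked in comments):

#!/usr/bin/env bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

base64_decode_bits() { # <base64 cpl> <shift> <mask>
    local bin_array b status=0
    bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
    # Reconstruction: take the first non-zero byte as the status value
    # (the trace shows status=2 for ...ACAA==); treating $3 as a plain
    # bitmask is inferred from the two calls below.
    for b in "${bin_array[@]}"; do
        ((b != 0)) && status=$b && break
    done
    printf '0x%x' $(((status >> $2) & $3))
}

$rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
$rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
    --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
tmp_file=$(mktemp /tmp/err_inj_XXXXX.txt)

# Assumed: the JSON reply (with its .cpl field) lands in $tmp_file.
$rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h \
    -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== > "$tmp_file" &
get_feat_pid=$!

sleep 2
$rpc bdev_nvme_reset_controller nvme0 # the reset must complete the stuck command
wait "$get_feat_pid"

spdk_nvme_status=$(jq -r .cpl "$tmp_file")
sc=$(base64_decode_bits "$spdk_nvme_status" 1 255)  # 0x1 in the run above
sct=$(base64_decode_bits "$spdk_nvme_status" 9 3)   # 0x0 in the run above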
00:23:23.656 ************************************ 00:23:23.656 18:50:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:23.656 18:50:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:23:23.656 18:50:52 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:23:23.656 18:50:52 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:23:23.656 18:50:52 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:23.656 18:50:52 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:23.656 18:50:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:23:23.656 ************************************ 00:23:23.656 START TEST nvme_fio 00:23:23.656 ************************************ 00:23:23.656 18:50:52 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # nvme_fio_test 00:23:23.656 18:50:52 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:23:23.656 18:50:52 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:23:23.656 18:50:52 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:23:23.657 18:50:52 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 00:23:23.657 18:50:52 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:23:23.657 18:50:52 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:23:23.657 18:50:52 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:23:23.657 18:50:52 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:23:23.657 18:50:52 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:23:23.657 18:50:52 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:23:23.657 18:50:52 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:23:23.657 18:50:52 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:23:23.657 18:50:52 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:23:23.657 18:50:52 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:23:23.657 18:50:52 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:23:23.915 18:50:52 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:23:23.915 18:50:52 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:23:24.174 18:50:52 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:23:24.174 18:50:52 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:23:24.174 18:50:52 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:23:24.174 18:50:52 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:24.174 18:50:52 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:24.174 18:50:52 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:24.174 18:50:52 nvme.nvme_fio -- 
common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:24.174 18:50:52 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:23:24.174 18:50:52 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:24.174 18:50:52 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:24.174 18:50:52 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:24.174 18:50:52 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:23:24.174 18:50:52 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:24.174 18:50:52 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:24.174 18:50:52 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:24.174 18:50:52 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:23:24.174 18:50:52 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:23:24.174 18:50:52 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:23:24.432 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:24.432 fio-3.35 00:23:24.432 Starting 1 thread 00:23:28.688 00:23:28.688 test: (groupid=0, jobs=1): err= 0: pid=66586: Tue Oct 8 18:50:57 2024 00:23:28.688 read: IOPS=15.7k, BW=61.5MiB/s (64.5MB/s)(123MiB/2001msec) 00:23:28.688 slat (nsec): min=4493, max=71615, avg=6657.04, stdev=2529.92 00:23:28.688 clat (usec): min=333, max=10635, avg=4045.25, stdev=968.79 00:23:28.688 lat (usec): min=342, max=10643, avg=4051.91, stdev=970.15 00:23:28.688 clat percentiles (usec): 00:23:28.688 | 1.00th=[ 2278], 5.00th=[ 2868], 10.00th=[ 3097], 20.00th=[ 3326], 00:23:28.688 | 30.00th=[ 3490], 40.00th=[ 3818], 50.00th=[ 4015], 60.00th=[ 4146], 00:23:28.688 | 70.00th=[ 4293], 80.00th=[ 4490], 90.00th=[ 4817], 95.00th=[ 6390], 00:23:28.688 | 99.00th=[ 7111], 99.50th=[ 8356], 99.90th=[ 9634], 99.95th=[10028], 00:23:28.688 | 99.99th=[10290] 00:23:28.688 bw ( KiB/s): min=61856, max=65696, per=100.00%, avg=63880.00, stdev=1928.43, samples=3 00:23:28.688 iops : min=15464, max=16424, avg=15970.00, stdev=482.11, samples=3 00:23:28.688 write: IOPS=15.8k, BW=61.6MiB/s (64.6MB/s)(123MiB/2001msec); 0 zone resets 00:23:28.688 slat (nsec): min=4654, max=68492, avg=6917.71, stdev=2682.85 00:23:28.688 clat (usec): min=383, max=10496, avg=4050.37, stdev=970.90 00:23:28.688 lat (usec): min=392, max=10541, avg=4057.29, stdev=972.34 00:23:28.688 clat percentiles (usec): 00:23:28.688 | 1.00th=[ 2278], 5.00th=[ 2868], 10.00th=[ 3097], 20.00th=[ 3326], 00:23:28.688 | 30.00th=[ 3490], 40.00th=[ 3851], 50.00th=[ 4047], 60.00th=[ 4146], 00:23:28.688 | 70.00th=[ 4293], 80.00th=[ 4490], 90.00th=[ 4883], 95.00th=[ 6390], 00:23:28.688 | 99.00th=[ 7046], 99.50th=[ 8160], 99.90th=[ 9503], 99.95th=[10028], 00:23:28.688 | 99.99th=[10421] 00:23:28.688 bw ( KiB/s): min=62232, max=64992, per=100.00%, avg=63576.00, stdev=1381.41, samples=3 00:23:28.688 iops : min=15558, max=16248, avg=15894.00, stdev=345.35, samples=3 00:23:28.688 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:23:28.688 lat (msec) : 2=0.36%, 4=47.60%, 10=51.96%, 20=0.06% 00:23:28.688 cpu : usr=99.00%, sys=0.15%, ctx=5, majf=0, 
minf=607 00:23:28.688 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:28.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:28.688 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:28.688 issued rwts: total=31504,31535,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:28.688 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:28.688 00:23:28.688 Run status group 0 (all jobs): 00:23:28.688 READ: bw=61.5MiB/s (64.5MB/s), 61.5MiB/s-61.5MiB/s (64.5MB/s-64.5MB/s), io=123MiB (129MB), run=2001-2001msec 00:23:28.688 WRITE: bw=61.6MiB/s (64.6MB/s), 61.6MiB/s-61.6MiB/s (64.6MB/s-64.6MB/s), io=123MiB (129MB), run=2001-2001msec 00:23:28.955 ----------------------------------------------------- 00:23:28.955 Suppressions used: 00:23:28.955 count bytes template 00:23:28.955 1 32 /usr/src/fio/parse.c 00:23:28.955 1 8 libtcmalloc_minimal.so 00:23:28.955 ----------------------------------------------------- 00:23:28.955 00:23:28.955 18:50:57 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:23:28.955 18:50:57 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:23:28.955 18:50:57 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:23:28.955 18:50:57 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:23:29.215 18:50:57 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:23:29.215 18:50:57 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:23:29.473 18:50:58 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:23:29.473 18:50:58 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:23:29.473 18:50:58 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:23:29.473 18:50:58 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:29.473 18:50:58 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:29.473 18:50:58 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:29.473 18:50:58 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:29.473 18:50:58 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:23:29.473 18:50:58 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:29.473 18:50:58 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:29.731 18:50:58 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:29.731 18:50:58 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:23:29.731 18:50:58 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:29.731 18:50:58 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:29.731 18:50:58 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:29.731 18:50:58 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:23:29.731 18:50:58 nvme.nvme_fio -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:23:29.731 18:50:58 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:23:29.731 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:29.731 fio-3.35 00:23:29.731 Starting 1 thread 00:23:33.928 00:23:33.928 test: (groupid=0, jobs=1): err= 0: pid=66652: Tue Oct 8 18:51:02 2024 00:23:33.928 read: IOPS=15.5k, BW=60.6MiB/s (63.6MB/s)(121MiB/2001msec) 00:23:33.928 slat (nsec): min=4409, max=95446, avg=6536.22, stdev=1987.50 00:23:33.928 clat (usec): min=280, max=9730, avg=4107.12, stdev=1006.97 00:23:33.928 lat (usec): min=286, max=9740, avg=4113.66, stdev=1007.82 00:23:33.928 clat percentiles (usec): 00:23:33.928 | 1.00th=[ 1795], 5.00th=[ 2245], 10.00th=[ 2573], 20.00th=[ 3326], 00:23:33.928 | 30.00th=[ 3884], 40.00th=[ 4047], 50.00th=[ 4146], 60.00th=[ 4359], 00:23:33.928 | 70.00th=[ 4621], 80.00th=[ 4883], 90.00th=[ 5145], 95.00th=[ 5473], 00:23:33.928 | 99.00th=[ 6915], 99.50th=[ 7570], 99.90th=[ 8586], 99.95th=[ 8717], 00:23:33.928 | 99.99th=[ 9372] 00:23:33.928 bw ( KiB/s): min=53288, max=77485, per=100.00%, avg=63292.33, stdev=12630.53, samples=3 00:23:33.928 iops : min=13322, max=19371, avg=15823.00, stdev=3157.49, samples=3 00:23:33.928 write: IOPS=15.5k, BW=60.6MiB/s (63.6MB/s)(121MiB/2001msec); 0 zone resets 00:23:33.928 slat (usec): min=4, max=106, avg= 6.74, stdev= 2.00 00:23:33.928 clat (usec): min=315, max=9621, avg=4105.67, stdev=1015.75 00:23:33.928 lat (usec): min=321, max=9627, avg=4112.41, stdev=1016.58 00:23:33.928 clat percentiles (usec): 00:23:33.928 | 1.00th=[ 1778], 5.00th=[ 2245], 10.00th=[ 2573], 20.00th=[ 3326], 00:23:33.928 | 30.00th=[ 3884], 40.00th=[ 4047], 50.00th=[ 4146], 60.00th=[ 4293], 00:23:33.928 | 70.00th=[ 4621], 80.00th=[ 4883], 90.00th=[ 5145], 95.00th=[ 5473], 00:23:33.928 | 99.00th=[ 6915], 99.50th=[ 7570], 99.90th=[ 8586], 99.95th=[ 8717], 00:23:33.928 | 99.99th=[ 9110] 00:23:33.928 bw ( KiB/s): min=52424, max=77341, per=100.00%, avg=62983.00, stdev=12885.59, samples=3 00:23:33.928 iops : min=13106, max=19335, avg=15745.67, stdev=3221.26, samples=3 00:23:33.928 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:23:33.928 lat (msec) : 2=2.34%, 4=33.09%, 10=64.52% 00:23:33.928 cpu : usr=98.85%, sys=0.05%, ctx=2, majf=0, minf=607 00:23:33.928 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:33.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:33.928 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:33.928 issued rwts: total=31046,31057,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:33.928 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:33.928 00:23:33.928 Run status group 0 (all jobs): 00:23:33.928 READ: bw=60.6MiB/s (63.6MB/s), 60.6MiB/s-60.6MiB/s (63.6MB/s-63.6MB/s), io=121MiB (127MB), run=2001-2001msec 00:23:33.928 WRITE: bw=60.6MiB/s (63.6MB/s), 60.6MiB/s-60.6MiB/s (63.6MB/s-63.6MB/s), io=121MiB (127MB), run=2001-2001msec 00:23:33.928 ----------------------------------------------------- 00:23:33.928 Suppressions used: 00:23:33.928 count bytes template 00:23:33.928 1 32 /usr/src/fio/parse.c 00:23:33.928 1 8 libtcmalloc_minimal.so 00:23:33.928 ----------------------------------------------------- 00:23:33.928 00:23:33.928 18:51:02 
nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:23:33.928 18:51:02 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:23:33.928 18:51:02 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:23:33.928 18:51:02 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:23:34.186 18:51:02 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:23:34.186 18:51:02 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:23:34.444 18:51:03 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:23:34.444 18:51:03 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:23:34.444 18:51:03 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:23:34.444 18:51:03 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:34.444 18:51:03 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:34.444 18:51:03 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:34.444 18:51:03 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:34.444 18:51:03 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:23:34.444 18:51:03 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:34.444 18:51:03 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:34.444 18:51:03 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:34.444 18:51:03 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:34.444 18:51:03 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:23:34.444 18:51:03 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:34.444 18:51:03 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:34.444 18:51:03 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:23:34.444 18:51:03 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:23:34.444 18:51:03 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:23:34.703 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:34.703 fio-3.35 00:23:34.703 Starting 1 thread 00:23:37.988 00:23:37.988 test: (groupid=0, jobs=1): err= 0: pid=66720: Tue Oct 8 18:51:06 2024 00:23:37.988 read: IOPS=17.6k, BW=68.9MiB/s (72.2MB/s)(138MiB/2001msec) 00:23:37.988 slat (usec): min=4, max=295, avg= 5.82, stdev= 2.32 00:23:37.988 clat (usec): min=313, max=9142, avg=3612.66, stdev=538.96 00:23:37.988 lat (usec): min=320, max=9156, avg=3618.48, stdev=539.72 00:23:37.988 clat percentiles (usec): 00:23:37.988 | 1.00th=[ 2966], 5.00th=[ 3064], 10.00th=[ 3130], 20.00th=[ 3195], 00:23:37.988 | 30.00th=[ 3228], 40.00th=[ 3294], 
50.00th=[ 3359], 60.00th=[ 3752], 00:23:37.988 | 70.00th=[ 3949], 80.00th=[ 4080], 90.00th=[ 4228], 95.00th=[ 4424], 00:23:37.988 | 99.00th=[ 5145], 99.50th=[ 5800], 99.90th=[ 7177], 99.95th=[ 7635], 00:23:37.988 | 99.99th=[ 8848] 00:23:37.988 bw ( KiB/s): min=67216, max=78992, per=100.00%, avg=73192.00, stdev=5889.97, samples=3 00:23:37.988 iops : min=16804, max=19748, avg=18298.00, stdev=1472.49, samples=3 00:23:37.988 write: IOPS=17.6k, BW=68.9MiB/s (72.2MB/s)(138MiB/2001msec); 0 zone resets 00:23:37.988 slat (nsec): min=4564, max=90750, avg=5997.58, stdev=1713.10 00:23:37.988 clat (usec): min=288, max=9489, avg=3621.58, stdev=544.82 00:23:37.988 lat (usec): min=294, max=9536, avg=3627.58, stdev=545.57 00:23:37.988 clat percentiles (usec): 00:23:37.988 | 1.00th=[ 2966], 5.00th=[ 3064], 10.00th=[ 3130], 20.00th=[ 3195], 00:23:37.988 | 30.00th=[ 3228], 40.00th=[ 3294], 50.00th=[ 3359], 60.00th=[ 3785], 00:23:37.988 | 70.00th=[ 3949], 80.00th=[ 4080], 90.00th=[ 4228], 95.00th=[ 4424], 00:23:37.988 | 99.00th=[ 5211], 99.50th=[ 5932], 99.90th=[ 7242], 99.95th=[ 7832], 00:23:37.988 | 99.99th=[ 9110] 00:23:37.988 bw ( KiB/s): min=67384, max=79024, per=100.00%, avg=73050.67, stdev=5826.06, samples=3 00:23:37.988 iops : min=16846, max=19756, avg=18262.67, stdev=1456.51, samples=3 00:23:37.988 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:23:37.988 lat (msec) : 2=0.05%, 4=73.19%, 10=26.73% 00:23:37.988 cpu : usr=99.15%, sys=0.05%, ctx=4, majf=0, minf=607 00:23:37.988 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:37.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:37.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:37.988 issued rwts: total=35278,35286,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:37.988 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:37.988 00:23:37.988 Run status group 0 (all jobs): 00:23:37.988 READ: bw=68.9MiB/s (72.2MB/s), 68.9MiB/s-68.9MiB/s (72.2MB/s-72.2MB/s), io=138MiB (144MB), run=2001-2001msec 00:23:37.988 WRITE: bw=68.9MiB/s (72.2MB/s), 68.9MiB/s-68.9MiB/s (72.2MB/s-72.2MB/s), io=138MiB (145MB), run=2001-2001msec 00:23:38.248 ----------------------------------------------------- 00:23:38.248 Suppressions used: 00:23:38.248 count bytes template 00:23:38.248 1 32 /usr/src/fio/parse.c 00:23:38.248 1 8 libtcmalloc_minimal.so 00:23:38.248 ----------------------------------------------------- 00:23:38.248 00:23:38.248 18:51:06 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:23:38.248 18:51:06 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:23:38.248 18:51:06 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:23:38.248 18:51:06 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:23:38.506 18:51:07 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:23:38.506 18:51:07 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:23:38.764 18:51:07 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:23:38.764 18:51:07 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:23:38.764 18:51:07 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:23:38.764 18:51:07 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:38.764 18:51:07 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:38.764 18:51:07 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:38.764 18:51:07 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:38.764 18:51:07 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:23:38.764 18:51:07 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:38.764 18:51:07 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:38.764 18:51:07 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:23:38.764 18:51:07 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:38.764 18:51:07 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:23:39.023 18:51:07 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:39.023 18:51:07 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:39.023 18:51:07 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:23:39.023 18:51:07 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:23:39.023 18:51:07 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:23:39.023 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:39.023 fio-3.35 00:23:39.023 Starting 1 thread 00:23:44.302 00:23:44.302 test: (groupid=0, jobs=1): err= 0: pid=66786: Tue Oct 8 18:51:12 2024 00:23:44.302 read: IOPS=15.4k, BW=60.3MiB/s (63.3MB/s)(121MiB/2001msec) 00:23:44.302 slat (nsec): min=4543, max=86755, avg=6755.23, stdev=2402.72 00:23:44.302 clat (usec): min=322, max=10343, avg=4124.61, stdev=974.73 00:23:44.302 lat (usec): min=329, max=10355, avg=4131.36, stdev=975.68 00:23:44.302 clat percentiles (usec): 00:23:44.302 | 1.00th=[ 2089], 5.00th=[ 2999], 10.00th=[ 3195], 20.00th=[ 3392], 00:23:44.302 | 30.00th=[ 3720], 40.00th=[ 4047], 50.00th=[ 4146], 60.00th=[ 4228], 00:23:44.302 | 70.00th=[ 4293], 80.00th=[ 4490], 90.00th=[ 4752], 95.00th=[ 5735], 00:23:44.302 | 99.00th=[ 8225], 99.50th=[ 8717], 99.90th=[ 9896], 99.95th=[10028], 00:23:44.302 | 99.99th=[10290] 00:23:44.302 bw ( KiB/s): min=56662, max=64272, per=99.67%, avg=61567.33, stdev=4255.61, samples=3 00:23:44.302 iops : min=14165, max=16068, avg=15391.67, stdev=1064.19, samples=3 00:23:44.302 write: IOPS=15.5k, BW=60.4MiB/s (63.3MB/s)(121MiB/2001msec); 0 zone resets 00:23:44.302 slat (nsec): min=4674, max=66060, avg=6912.58, stdev=2236.45 00:23:44.302 clat (usec): min=224, max=10339, avg=4133.73, stdev=955.79 00:23:44.302 lat (usec): min=230, max=10351, avg=4140.64, stdev=956.72 00:23:44.302 clat percentiles (usec): 00:23:44.302 | 1.00th=[ 2114], 5.00th=[ 2999], 10.00th=[ 3228], 20.00th=[ 3392], 00:23:44.302 | 30.00th=[ 3752], 40.00th=[ 4047], 50.00th=[ 4146], 60.00th=[ 4228], 00:23:44.302 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 4752], 95.00th=[ 5735], 
00:23:44.302 | 99.00th=[ 8160], 99.50th=[ 8717], 99.90th=[ 9896], 99.95th=[10159], 00:23:44.302 | 99.99th=[10290] 00:23:44.302 bw ( KiB/s): min=56982, max=63712, per=98.93%, avg=61143.33, stdev=3636.71, samples=3 00:23:44.302 iops : min=14245, max=15930, avg=15285.67, stdev=909.73, samples=3 00:23:44.302 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.03% 00:23:44.302 lat (msec) : 2=0.75%, 4=35.02%, 10=64.10%, 20=0.07% 00:23:44.302 cpu : usr=98.80%, sys=0.20%, ctx=4, majf=0, minf=605 00:23:44.302 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:44.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:44.302 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:44.302 issued rwts: total=30901,30917,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:44.302 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:44.302 00:23:44.302 Run status group 0 (all jobs): 00:23:44.302 READ: bw=60.3MiB/s (63.3MB/s), 60.3MiB/s-60.3MiB/s (63.3MB/s-63.3MB/s), io=121MiB (127MB), run=2001-2001msec 00:23:44.302 WRITE: bw=60.4MiB/s (63.3MB/s), 60.4MiB/s-60.4MiB/s (63.3MB/s-63.3MB/s), io=121MiB (127MB), run=2001-2001msec 00:23:44.302 ----------------------------------------------------- 00:23:44.302 Suppressions used: 00:23:44.302 count bytes template 00:23:44.302 1 32 /usr/src/fio/parse.c 00:23:44.302 1 8 libtcmalloc_minimal.so 00:23:44.302 ----------------------------------------------------- 00:23:44.302 00:23:44.302 18:51:12 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:23:44.302 18:51:12 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:23:44.302 00:23:44.302 real 0m20.757s 00:23:44.302 user 0m15.768s 00:23:44.302 sys 0m5.353s 00:23:44.302 18:51:12 nvme.nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:44.302 18:51:12 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:23:44.302 ************************************ 00:23:44.302 END TEST nvme_fio 00:23:44.302 ************************************ 00:23:44.302 ************************************ 00:23:44.302 END TEST nvme 00:23:44.302 ************************************ 00:23:44.302 00:23:44.302 real 1m38.265s 00:23:44.302 user 3m49.593s 00:23:44.302 sys 0m25.568s 00:23:44.302 18:51:12 nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:44.302 18:51:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:23:44.302 18:51:12 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:23:44.302 18:51:12 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:23:44.302 18:51:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:44.302 18:51:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:44.302 18:51:12 -- common/autotest_common.sh@10 -- # set +x 00:23:44.302 ************************************ 00:23:44.302 START TEST nvme_scc 00:23:44.302 ************************************ 00:23:44.302 18:51:12 nvme_scc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:23:44.302 * Looking for test storage... 
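END TEST nvme_fio wraps up four identical passes, one per controller (0000:00:10.0 through 0000:00:13.0): identify the namespace, pick bs=4096 since no Extended Data LBA format is in use, then launch fio with the SPDK ioengine. The libasan probe exists because the plugin was built with ASan, so the sanitizer runtime has to come first in LD_PRELOAD. A condensed sketch of that per-controller launch (the loop framing is assumed; the paths, the ldd/grep/awk probe, and the fio arguments come from the trace):

spdk=/home/vagrant/spdk_repo/spdk
plugin=$spdk/build/fio/spdk_nvme

for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
    bs=4096 # every namespace above reported plain (non-extended) LBAs
    # Put ASan's runtime ahead of the sanitized fio plugin in LD_PRELOAD.
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        "$spdk/app/fio/nvme/example_config.fio" \
        "--filename=trtype=PCIe traddr=${bdf//:/.}" --bs="$bs"
done

The reported rates are self-consistent: the first pass's roughly 15.7k read IOPS at a 4 KiB block size works out to about 61.5 MiB/s, matching its READ bandwidth line.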
00:23:44.302 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:23:44.302 18:51:13 nvme_scc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:44.302 18:51:13 nvme_scc -- common/autotest_common.sh@1681 -- # lcov --version 00:23:44.302 18:51:13 nvme_scc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:44.560 18:51:13 nvme_scc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:44.560 18:51:13 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:44.560 18:51:13 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:44.560 18:51:13 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:44.560 18:51:13 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:23:44.560 18:51:13 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:23:44.560 18:51:13 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:23:44.560 18:51:13 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:23:44.560 18:51:13 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:23:44.560 18:51:13 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:23:44.560 18:51:13 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:23:44.561 18:51:13 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:44.561 18:51:13 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:23:44.561 18:51:13 nvme_scc -- scripts/common.sh@345 -- # : 1 00:23:44.561 18:51:13 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:44.561 18:51:13 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:44.561 18:51:13 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:23:44.561 18:51:13 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:23:44.561 18:51:13 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:44.561 18:51:13 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:23:44.561 18:51:13 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:23:44.561 18:51:13 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:23:44.561 18:51:13 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:23:44.561 18:51:13 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:44.561 18:51:13 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:23:44.561 18:51:13 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:23:44.561 18:51:13 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:44.561 18:51:13 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:44.561 18:51:13 nvme_scc -- scripts/common.sh@368 -- # return 0 00:23:44.561 18:51:13 nvme_scc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:44.561 18:51:13 nvme_scc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:44.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.561 --rc genhtml_branch_coverage=1 00:23:44.561 --rc genhtml_function_coverage=1 00:23:44.561 --rc genhtml_legend=1 00:23:44.561 --rc geninfo_all_blocks=1 00:23:44.561 --rc geninfo_unexecuted_blocks=1 00:23:44.561 00:23:44.561 ' 00:23:44.561 18:51:13 nvme_scc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:44.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.561 --rc genhtml_branch_coverage=1 00:23:44.561 --rc genhtml_function_coverage=1 00:23:44.561 --rc genhtml_legend=1 00:23:44.561 --rc geninfo_all_blocks=1 00:23:44.561 --rc geninfo_unexecuted_blocks=1 00:23:44.561 00:23:44.561 ' 00:23:44.561 18:51:13 nvme_scc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:23:44.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.561 --rc genhtml_branch_coverage=1 00:23:44.561 --rc genhtml_function_coverage=1 00:23:44.561 --rc genhtml_legend=1 00:23:44.561 --rc geninfo_all_blocks=1 00:23:44.561 --rc geninfo_unexecuted_blocks=1 00:23:44.561 00:23:44.561 ' 00:23:44.561 18:51:13 nvme_scc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:44.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.561 --rc genhtml_branch_coverage=1 00:23:44.561 --rc genhtml_function_coverage=1 00:23:44.561 --rc genhtml_legend=1 00:23:44.561 --rc geninfo_all_blocks=1 00:23:44.561 --rc geninfo_unexecuted_blocks=1 00:23:44.561 00:23:44.561 ' 00:23:44.561 18:51:13 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:23:44.561 18:51:13 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:23:44.561 18:51:13 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:23:44.561 18:51:13 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:44.561 18:51:13 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:44.561 18:51:13 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:23:44.561 18:51:13 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:44.561 18:51:13 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:44.561 18:51:13 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:44.561 18:51:13 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.561 18:51:13 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.561 18:51:13 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.561 18:51:13 nvme_scc -- paths/export.sh@5 -- # export PATH 00:23:44.561 18:51:13 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
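Before the LCOV exports above, the trace walks scripts/common.sh's version gate once more for nvme_scc: 'lt 1.15 2' splits the lcov version on . - :, compares the fields numerically left to right, and, since 1.15 predates lcov 2, keeps the old lcov_branch_coverage option spelling. A condensed reconstruction of that comparison (the traced helper also validates each field through its decimal() routine, elided here):

cmp_versions() { # <ver1> <op> <ver2>
    local ver1 ver2 v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ver1[v]=${ver1[v]:-0} ver2[v]=${ver2[v]:-0} # pad missing fields with 0
        ((ver1[v] > ver2[v])) && { [[ $2 == '>' || $2 == '>=' ]]; return; }
        ((ver1[v] < ver2[v])) && { [[ $2 == '<' || $2 == '<=' ]]; return; }
    done
    [[ $2 == *'='* ]] # all fields tie: only <=, >=, == succeed
}

lt() { cmp_versions "$1" '<' "$2"; } # lt 1.15 2 -> true in the run above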
00:23:44.561 18:51:13 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:23:44.561 18:51:13 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:23:44.561 18:51:13 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:23:44.561 18:51:13 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:23:44.561 18:51:13 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:23:44.561 18:51:13 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:23:44.561 18:51:13 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:23:44.561 18:51:13 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:23:44.561 18:51:13 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:23:44.561 18:51:13 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:44.561 18:51:13 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:23:44.561 18:51:13 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:23:44.561 18:51:13 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:23:44.561 18:51:13 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:45.128 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:45.128 Waiting for block devices as requested 00:23:45.128 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:45.386 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:45.386 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:23:45.644 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:23:50.920 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:23:50.920 18:51:19 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:23:50.920 18:51:19 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:23:50.920 18:51:19 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:23:50.920 18:51:19 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:23:50.920 18:51:19 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:23:50.920 18:51:19 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:23:50.920 18:51:19 nvme_scc -- scripts/common.sh@18 -- # local i 00:23:50.920 18:51:19 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:23:50.920 18:51:19 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:23:50.920 18:51:19 nvme_scc -- scripts/common.sh@27 -- # return 0 00:23:50.920 18:51:19 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:23:50.920 18:51:19 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:23:50.920 18:51:19 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:23:50.920 18:51:19 nvme_scc -- nvme/functions.sh@18 -- # shift 00:23:50.920 18:51:19 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:23:50.920 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.920 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.920 18:51:19 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:23:50.920 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:23:50.920 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.920 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.920 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:23:50.920 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:23:50.920 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
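scan_nvme_ctrls has started filling the nvme0 array here: nvme_get shifts off the array name, declares it as a global associative array, then splits every "register : value" line of nvme-cli's id-ctrl output on ':' into a key/value assignment via eval (the dump of those assignments continues below). A condensed sketch of that parse loop, with the nvme-cli path and the vid example taken from the trace; the whitespace handling is a simplification, since the traced helper preserves padded values like sn='12341   ':

nvme_get() { # e.g. nvme_get nvme0 id-ctrl /dev/nvme0
    local ref=$1 reg val
    shift
    local -gA "$ref=()"
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue # keep only "reg : val" lines
        reg=${reg//[[:space:]]/}
        eval "${ref}[${reg}]=\"${val# }\""
    done < <(/usr/local/src/nvme-cli/nvme "$@")
}

nvme_get nvme0 id-ctrl /dev/nvme0
echo "${nvme0[vid]}" # 0x1b36 for the QEMU controller above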
00:23:50.920 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.920 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.920 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:23:50.920 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:23:50.920 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:23:50.920 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.920 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.920 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:23:50.920 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.921 18:51:19 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.921 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:23:50.922 18:51:19 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.922 18:51:19 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.922 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.923 18:51:19 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:23:50.923 18:51:19 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@18 -- # shift 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.923 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
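
A quick decode of the namespace geometry captured just above: flbas=0x4 selects LBA format 4 (the low nibble of flbas), and the lbaf4 entry further down in this dump reads "ms:0 lbads:12 rp:0 (in use)", i.e. 2^12 = 4096-byte data blocks with no metadata. Combined with nsze=0x140000 that pins the namespace size; a hedged snippet (the derivation below is illustrative, not a helper from the SPDK tree):

lbaf=$(( ${nvme0n1[flbas]} & 0xf ))           # 0x4 -> in-use format index 4
lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<< "${nvme0n1[lbaf$lbaf]}")   # -> 12
echo $(( ${nvme0n1[nsze]} * (1 << lbads) ))   # 0x140000 * 4096 = 5368709120 bytes (5 GiB)
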
00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.924 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:23:50.925 18:51:19 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:23:50.925 18:51:19 nvme_scc -- scripts/common.sh@18 -- # local i 00:23:50.925 18:51:19 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:23:50.925 18:51:19 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:23:50.925 18:51:19 nvme_scc -- scripts/common.sh@27 -- # return 0 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@18 -- # shift 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.925 18:51:19 nvme_scc -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:23:50.925 18:51:19 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[mdts]=7 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:23:50.925 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:23:50.926 
18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
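
By this point the log has moved on to the second controller: functions.sh@47 iterates /sys/class/nvme/nvme*, @49-@50 resolved nvme1's PCI address 0000:00:10.0 and passed it through the pci_can_use filter in scripts/common.sh, and @52 kicked off the id-ctrl dump now being traced. Once a controller and its namespaces are fully captured, @58-@63 file them into the bookkeeping arrays (ctrls, nvmes, bdfs, ordered_ctrls), as seen earlier for nvme0. A sketch of that outer loop, again reconstructed from the @-markers alone — scan_nvme_ctrls and pci_of are stand-in names, and the PCI-address derivation is not shown in this trace:

scan_nvme_ctrls() {
	# ctrls, nvmes, bdfs, ordered_ctrls are assumed declared by the caller
	local ctrl ns pci ctrl_dev ns_dev
	for ctrl in /sys/class/nvme/nvme*; do               # @47
		[[ -e $ctrl ]] || continue                      # @48
		pci=$(pci_of "$ctrl")                           # @49: trace only shows the result
		pci_can_use "$pci" || continue                  # @50: filter in scripts/common.sh
		ctrl_dev=${ctrl##*/}                            # @51: e.g. nvme1
		nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"   # @52: the dump traced above
		local -n _ctrl_ns=${ctrl_dev}_ns                # @53
		for ns in "$ctrl/${ctrl_dev}n"*; do             # @54
			[[ -e $ns ]] || continue                    # @55
			ns_dev=${ns##*/}                            # @56: e.g. nvme0n1
			nvme_get "$ns_dev" id-ns "/dev/$ns_dev"     # @57
			_ctrl_ns[${ns##*n}]=$ns_dev                 # @58: nvme0_ns[1]=nvme0n1
		done
		ctrls["$ctrl_dev"]=$ctrl_dev                    # @60
		nvmes["$ctrl_dev"]=${ctrl_dev}_ns               # @61
		bdfs["$ctrl_dev"]=$pci                          # @62: 0000:00:11.0, 0000:00:10.0
		ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev      # @63: indexed by controller number
	done
}
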
00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:23:50.926 18:51:19 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.926 18:51:19 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.926 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:23:50.927 18:51:19 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:23:50.927 18:51:19 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.927 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
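[annotation] The block above is the core of nvme_get (nvme/functions.sh@16-23): the text output of `nvme id-ctrl /dev/nvme1` is read line by line with `IFS=: read -r reg val`, and each `field : value` pair is eval'd into a global associative array, producing the entries visible in the trace such as `nvme1[frmw]=0x3` and `nvme1[subnqn]=nqn.2019-08.org.qemu:12340`. A minimal sketch of that pattern follows; the trimming details and the plain `nvme` binary name are assumptions (the job actually invokes /usr/local/src/nvme-cli/nvme), not the exact upstream code:

```bash
#!/usr/bin/env bash
# Sketch of the nvme_get pattern seen in the trace: parse nvme-cli's
# "field : value" text into a global associative array named by $1.
nvme_get() {
	local ref=$1 reg val
	shift
	local -gA "$ref=()"               # matches functions.sh@20 in the trace
	while IFS=: read -r reg val; do
		reg=${reg//[[:space:]]/}  # strip padding around the field name
		val=${val# }              # strip the space after the colon
		[[ -n $reg && -n $val ]] || continue
		eval "${ref}[\$reg]=\$val"  # -> nvme1[frmw]=0x3, nvme1[lpa]=0x7, ...
	done < <("$@")
}

nvme_get nvme1 nvme id-ctrl /dev/nvme1   # hypothetical invocation; needs root
echo "model: ${nvme1[mn]}, subnqn: ${nvme1[subnqn]}"
```

Because read splits only at the first colon, values that themselves contain colons survive as whole strings, which is why the trace stores lines like 'mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' under ps0 and the lbaf descriptors as 'ms:64 lbads:12 rp:0 (in use)'.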
00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@18 -- # shift 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[ncap]=0x17a17a 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.928 18:51:19 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:23:50.928 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:23:50.929 
18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:23:50.929 18:51:19 nvme_scc -- scripts/common.sh@18 -- # local i 00:23:50.929 18:51:19 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:23:50.929 18:51:19 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:23:50.929 18:51:19 nvme_scc -- scripts/common.sh@27 -- # return 0 00:23:50.929 18:51:19 nvme_scc -- 
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@18 -- # shift 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:23:50.929 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.930 18:51:19 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:23:50.930 18:51:19 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:23:50.930 18:51:19 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:23:50.930 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
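[annotation] The wctemp/cctemp values just captured for nvme2 (343 and 373, identical to nvme1 earlier) are kelvin; the NVMe Identify Controller data defines the warning and critical composite-temperature thresholds that way, so they correspond to 343 - 273 = 70 C and 373 - 273 = 100 C. A quick conversion, assuming the nvme2 array was filled by the nvme_get pattern sketched above:

```bash
# WCTEMP/CCTEMP are reported in kelvin; convert to Celsius.
printf 'warning: %d C, critical: %d C\n' \
	$(( ${nvme2[wctemp]:-0} - 273 )) \
	$(( ${nvme2[cctemp]:-0} - 273 ))   # -> warning: 70 C, critical: 100 C
```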
00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:23:50.931 18:51:19 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
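[annotation] For orientation, the enclosing control flow (functions.sh@47-63, visible above where the trace hands off from nvme1 at 0000:00:10.0 to nvme2 at 0000:00:12.0) loops over /sys/class/nvme/nvme*, filters each controller's PCI address through pci_can_use, snapshots id-ctrl plus per-namespace id-ns data with nvme_get, and records the results in the ctrls/nvmes/bdfs/ordered_ctrls maps. A simplified reconstruction follows; the BDF lookup via readlink and the pci_can_use stub are assumptions, and nvme_get is the sketch given earlier, not the upstream source:

```bash
# Reconstruction of the discovery loop traced at nvme/functions.sh@47-63.
declare -A ctrls nvmes bdfs
declare -a ordered_ctrls

# Stand-in for scripts/common.sh pci_can_use: allow everything except
# addresses listed in PCI_BLOCKED (simplified from the traced checks).
pci_can_use() { [[ " ${PCI_BLOCKED:-} " != *" $1 "* ]]; }

for ctrl in /sys/class/nvme/nvme*; do
	[[ -e $ctrl ]] || continue
	pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:12.0
	pci_can_use "$pci" || continue
	ctrl_dev=${ctrl##*/}                              # e.g. nvme2
	nvme_get "$ctrl_dev" nvme id-ctrl "/dev/$ctrl_dev"
	declare -gA "${ctrl_dev}_ns=()"
	for ns in "$ctrl/${ctrl##*/}n"*; do               # nvme2n1, nvme2n2, ...
		[[ -e $ns ]] || continue
		ns_dev=${ns##*/}
		nvme_get "$ns_dev" nvme id-ns "/dev/$ns_dev"
		eval "${ctrl_dev}_ns[\${ns_dev##*n}]=\$ns_dev"   # nvme2_ns[1]=nvme2n1
	done
	ctrls[$ctrl_dev]=$ctrl_dev       # mirrors functions.sh@60-63
	nvmes[$ctrl_dev]=${ctrl_dev}_ns
	bdfs[$ctrl_dev]=$pci
	ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
done
```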
00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
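[annotation] sqes and cqes (0x66 and 0x44 here, the same on nvme1) pack two powers of two per byte: the low nibble is the required submission/completion queue entry size and the high nibble the maximum, both as log2 of the byte count, so 0x66 means 64-byte SQEs and 0x44 means 16-byte CQEs, the standard NVMe entry sizes. Decoding them with shell arithmetic:

```bash
# Low nibble = required entry size, high nibble = maximum, as log2(bytes).
sqes=0x66 cqes=0x44   # values captured for nvme2 above
printf 'SQ entry: %d..%d bytes\n' $((2 ** (sqes & 0xf))) $((2 ** (sqes >> 4)))
printf 'CQ entry: %d..%d bytes\n' $((2 ** (cqes & 0xf))) $((2 ** (cqes >> 4)))
# -> SQ entry: 64..64 bytes
# -> CQ entry: 16..16 bytes
```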
00:23:50.931 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:50.932 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:23:51.196 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:23:51.196 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.196 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.196 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.196 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:23:51.196 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:23:51.196 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.196 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.196 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:23:51.196 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:23:51.196 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:23:51.196 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.196 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.196 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.196 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:23:51.196 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:23:51.196 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.196 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.196 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.196 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:23:51.196 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:23:51.196 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.196 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.196 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.196 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:23:51.196 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:23:51.196 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.196 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:23:51.197 
18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@18 -- # shift 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
0x100000 ]] 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 
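
The first namespace pass reports nvme2n1 with nsze = ncap = nuse = 0x100000 and flbas=0x4. The low nibble of flbas selects LBA format 4, and the lbaf4 entry captured a little further down reads lbads:12, i.e. 2^12 = 4096-byte blocks, so this is a fully allocated 4 GiB namespace:

    # Size of nvme2n1 from the fields above: flbas low nibble -> lbaf4 -> lbads:12.
    nsze=0x100000 lbads=12
    echo "$(( nsze * (1 << lbads) / 1024**3 )) GiB"   # 4 GiB
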
00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.197 18:51:19 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.197 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:23:51.198 18:51:19 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
ms:8 lbads:9 rp:0 ]] 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:23:51.198 18:51:19 nvme_scc -- 
nvme/functions.sh@18 -- # shift 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:23:51.198 18:51:19 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.198 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
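
Stepping back: each per-namespace pass in this stretch of the log (nvme2n1 above, nvme2n2 here, nvme2n3 below) is driven by the loop visible at functions.sh@54-58, nested inside the per-controller loop at @47. A sketch reconstructed from those trace markers, reusing the nvme_get sketch from earlier; the real script may differ:

    # Reconstructed from the @47-@58 markers; not the verbatim functions.sh.
    for ctrl in /sys/class/nvme/nvme*; do            # @47: one pass per controller
        # ... id-ctrl pass for the controller itself omitted ...
        for ns in "$ctrl/${ctrl##*/}n"*; do          # @54: glob the controller's namespaces
            [[ -e $ns ]] || continue                 # @55: glob may match nothing
            ns_dev=${ns##*/}                         # @56: e.g. nvme2n2
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"  # @57: fill the nvme2n2 array
            _ctrl_ns[${ns##*n}]=$ns_dev              # @58: index by namespace number
        done
    done
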
00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
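
The mssrl, mcl, and msrc fields captured just above bound the NVMe Copy command on nvme2n2: at most 128 LBAs per source range (mssrl), 128 LBAs per command in total (mcl), and, since msrc is a 0's-based field, 127 + 1 = 128 source ranges. That is consistent with the controller side, where oncs=0x15d was recorded earlier with bit 8 (Copy supported) set:

    # Cross-check of the captured values (oncs bit 8 = Copy support; msrc is 0-based).
    oncs=0x15d msrc=127
    echo "copy supported: $(( (oncs >> 8) & 1 )), max source ranges: $(( msrc + 1 ))"   # 1, 128
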
00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:23:51.199 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.200 
18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@18 -- # shift 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.200 
18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:23:51.200 18:51:19 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.200 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:23:51.201 
18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:23:51.201 18:51:19 nvme_scc -- scripts/common.sh@18 -- # local i 00:23:51.201 18:51:19 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:23:51.201 18:51:19 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:23:51.201 18:51:19 nvme_scc -- scripts/common.sh@27 -- # return 0 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@18 -- # shift 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:23:51.201 18:51:19 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:23:51.201 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
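A quick gloss on the identity fields just captured for nvme3: vid 0x1b36 is the PCI vendor ID QEMU assigns to its emulated devices, ssvid 0x1af4 is Red Hat's virtio vendor ID, and ieee 525400 is the familiar 52:54:00 QEMU OUI. The sn/mn/fr values keep their trailing spaces because the eval quotes the raw field. The same two vendor IDs resurface in decimal in the simple-copy test output further down ("PCI vendor:6966 PCI subsystem vendor:6900"); the conversion is easy to confirm in the shell:

    # hex-to-decimal check for the vendor IDs seen in the trace
    printf '%d %d\n' 0x1b36 0x1af4   # prints: 6966 6900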
00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
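Worth flagging in the block above: nvme3 reports ctratt 0x88010, i.e. bits 4, 15 and 19. Assuming the NVMe 2.0 CTRATT layout (per TP4146), bit 4 is Endurance Groups, consistent with endgidmax=1 below, and bit 19 is Flexible Data Placement, which fits this controller's subnqn nqn.2019-08.org.qemu:fdp-subsys3 and the nvme_fdp suite that starts later in this log; nvme0, by contrast, reports ctratt 0x8000 in the rescan at the end of this section. The bit test mirrors the style functions.sh itself uses for ONCS:

    # bit 19 of CTRATT flags Flexible Data Placement in the NVMe 2.0 layout
    # (assumption: layout per TP4146; re-check against the spec revision in use)
    (( 0x88010 & 1 << 19 )) && echo 'controller advertises FDP'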
00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.202 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.203 
18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
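Every assignment traced above follows the same loop in functions.sh: read one "field : value" row of nvme id-ctrl output with IFS set to ':', skip rows with an empty value (the recurring `[[ -n '' ]]` test), and eval the pair into an associative array named after the controller. A minimal sketch of that pattern, not the exact helper (the real nvme_get also shifts into per-namespace arrays and preserves field padding exactly as captured):

    # sketch of the nvme_get parsing pattern visible in the trace
    declare -A ctrl=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}            # "ps 0   " -> "ps0", as in the trace
        [[ -n $reg && -n $val ]] || continue
        eval "ctrl[$reg]=\"${val# }\""      # e.g. ctrl[vid]="0x1b36"
    done < <(nvme id-ctrl /dev/nvme3)
    echo "${ctrl[vid]} ${ctrl[sn]} ${ctrl[oncs]}"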
00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.203 18:51:19 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:23:51.203 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
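Two of the values above decode nicely by hand. sqes 0x66 and cqes 0x44 are nibble pairs, low nibble the required entry size and high nibble the maximum, each as a power of two, so this controller uses the standard 64-byte submission and 16-byte completion queue entries. And oncs 0x15d has bit 8 set, the Copy-command flag that the ctrl_has_scc probe masks a little further down with (( oncs & 1 << 8 )):

    # SQES/CQES: low nibble = required, high nibble = max, as log2(bytes)
    printf 'SQE %d B, CQE %d B\n' $(( 1 << (0x66 & 0xf) )) $(( 1 << (0x44 & 0xf) ))
    # ONCS bit 8 is the Copy (simple copy) support bit tested by ctrl_has_scc
    (( 0x15d & 1 << 8 )) && echo 'Copy (SCC) supported'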
00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:23:51.204 18:51:19 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:23:51.204 18:51:19 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:23:51.204 
18:51:19 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:23:51.204 18:51:19 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:23:51.463 18:51:19 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:23:51.463 18:51:19 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:23:51.463 18:51:19 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:23:51.463 18:51:19 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:23:51.463 18:51:19 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:23:51.463 18:51:19 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:52.031 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:52.598 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:52.598 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:23:52.598 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:52.598 0000:00:12.0 (1b36 
0010): nvme -> uio_pci_generic
00:23:52.858 18:51:21 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:23:52.858 18:51:21 nvme_scc -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:23:52.858 18:51:21 nvme_scc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:23:52.858 18:51:21 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:23:52.858 ************************************
00:23:52.858 START TEST nvme_simple_copy
00:23:52.858 ************************************
00:23:52.858 18:51:21 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:23:53.124 Initializing NVMe Controllers
00:23:53.125 Attaching to 0000:00:10.0
00:23:53.125 Controller supports SCC. Attached to 0000:00:10.0
00:23:53.125 Namespace ID: 1 size: 6GB
00:23:53.125 Initialization complete.
00:23:53.125
00:23:53.125 Controller QEMU NVMe Ctrl (12340 )
00:23:53.125 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:23:53.125 Namespace Block Size:4096
00:23:53.125 Writing LBAs 0 to 63 with Random Data
00:23:53.125 Copied LBAs from 0 - 63 to the Destination LBA 256
00:23:53.125 LBAs matching Written Data: 64
00:23:53.125
00:23:53.125 ************************************
00:23:53.125 END TEST nvme_simple_copy
00:23:53.125 ************************************
00:23:53.125 real 0m0.366s
00:23:53.125 user 0m0.149s
00:23:53.125 sys 0m0.113s
00:23:53.125 18:51:21 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1126 -- # xtrace_disable
00:23:53.125 18:51:21 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:23:53.125 ************************************
00:23:53.125 END TEST nvme_scc
00:23:53.125 ************************************
00:23:53.125
00:23:53.125 real 0m8.870s
00:23:53.125 user 0m1.574s
00:23:53.125 sys 0m2.226s
00:23:53.125 18:51:21 nvme_scc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:23:53.125 18:51:21 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:23:53.125 18:51:21 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:23:53.125 18:51:21 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:23:53.125 18:51:21 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:23:53.125 18:51:21 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:23:53.125 18:51:21 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:23:53.125 18:51:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:23:53.125 18:51:21 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:23:53.125 18:51:21 -- common/autotest_common.sh@10 -- # set +x
00:23:53.125 ************************************
00:23:53.125 START TEST nvme_fdp
00:23:53.125 ************************************
00:23:53.125 18:51:21 nvme_fdp -- common/autotest_common.sh@1125 -- # test/nvme/nvme_fdp.sh
00:23:53.406 * Looking for test storage...
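A note on the nvme_simple_copy run above: the test wrote 64 random 4096-byte LBAs at offset 0, issued a Simple Copy to destination LBA 256, and "LBAs matching Written Data: 64" indicates every copied block read back identical. The device was driven through SPDK's userspace PCIe driver, but the same verification could in principle be reproduced with nvme-cli on a kernel-attached namespace (device name below is hypothetical; --block-count is zero-based, so 63 means 64 blocks):

    nvme read /dev/nvme0n1 --start-block=0   --block-count=63 \
         --data-size=$((64 * 4096)) --data=/tmp/src.bin
    nvme read /dev/nvme0n1 --start-block=256 --block-count=63 \
         --data-size=$((64 * 4096)) --data=/tmp/dst.bin
    cmp /tmp/src.bin /tmp/dst.bin && echo 'destination matches source'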
00:23:53.406 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:23:53.406 18:51:21 nvme_fdp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:53.406 18:51:21 nvme_fdp -- common/autotest_common.sh@1681 -- # lcov --version 00:23:53.406 18:51:21 nvme_fdp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:53.406 18:51:22 nvme_fdp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:53.406 18:51:22 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:53.406 18:51:22 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:53.406 18:51:22 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:53.406 18:51:22 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:23:53.406 18:51:22 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:23:53.406 18:51:22 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:23:53.406 18:51:22 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:23:53.406 18:51:22 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:23:53.406 18:51:22 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:23:53.406 18:51:22 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:23:53.406 18:51:22 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:53.406 18:51:22 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:23:53.406 18:51:22 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:23:53.406 18:51:22 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:53.406 18:51:22 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:53.406 18:51:22 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:23:53.406 18:51:22 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:23:53.406 18:51:22 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:53.406 18:51:22 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:23:53.406 18:51:22 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:23:53.406 18:51:22 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:23:53.406 18:51:22 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:23:53.406 18:51:22 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:53.406 18:51:22 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:23:53.406 18:51:22 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:23:53.406 18:51:22 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:53.406 18:51:22 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:53.406 18:51:22 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:23:53.406 18:51:22 nvme_fdp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:53.406 18:51:22 nvme_fdp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:53.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.406 --rc genhtml_branch_coverage=1 00:23:53.406 --rc genhtml_function_coverage=1 00:23:53.406 --rc genhtml_legend=1 00:23:53.406 --rc geninfo_all_blocks=1 00:23:53.406 --rc geninfo_unexecuted_blocks=1 00:23:53.406 00:23:53.406 ' 00:23:53.406 18:51:22 nvme_fdp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:53.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.406 --rc genhtml_branch_coverage=1 00:23:53.406 --rc genhtml_function_coverage=1 00:23:53.406 --rc genhtml_legend=1 00:23:53.406 --rc geninfo_all_blocks=1 00:23:53.406 --rc geninfo_unexecuted_blocks=1 00:23:53.406 00:23:53.406 ' 00:23:53.406 18:51:22 nvme_fdp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:23:53.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.406 --rc genhtml_branch_coverage=1 00:23:53.406 --rc genhtml_function_coverage=1 00:23:53.406 --rc genhtml_legend=1 00:23:53.406 --rc geninfo_all_blocks=1 00:23:53.406 --rc geninfo_unexecuted_blocks=1 00:23:53.406 00:23:53.406 ' 00:23:53.406 18:51:22 nvme_fdp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:53.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.406 --rc genhtml_branch_coverage=1 00:23:53.406 --rc genhtml_function_coverage=1 00:23:53.406 --rc genhtml_legend=1 00:23:53.406 --rc geninfo_all_blocks=1 00:23:53.406 --rc geninfo_unexecuted_blocks=1 00:23:53.406 00:23:53.406 ' 00:23:53.406 18:51:22 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:23:53.406 18:51:22 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:23:53.406 18:51:22 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:23:53.406 18:51:22 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:53.406 18:51:22 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:53.406 18:51:22 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:23:53.406 18:51:22 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:53.406 18:51:22 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:53.406 18:51:22 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:53.406 18:51:22 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.406 18:51:22 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.406 18:51:22 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.406 18:51:22 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:23:53.406 18:51:22 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
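The lcov probe above feeds `lcov --version` through cmp_versions from scripts/common.sh, which splits each version string on '.', '-' or ':' and compares the components numerically; since 1.15 < 2, the pre-2.0 '--rc lcov_branch_coverage=1' option spelling gets exported in LCOV_OPTS. A condensed sketch of that "<" path (simplified; the real helper also handles '>', '=', and the other operators the same way):

    # sketch of the cmp_versions "<" comparison from scripts/common.sh
    lt() {
        local IFS=.-:
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((i = 0; i < n; i++)); do
            (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0   # strictly older
            (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1   # strictly newer
        done
        return 1   # equal -> not less-than
    }
    lt 1.15 2 && echo 'lcov predates the 2.x option names'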
00:23:53.406 18:51:22 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:23:53.407 18:51:22 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:23:53.407 18:51:22 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:23:53.407 18:51:22 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:23:53.407 18:51:22 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:23:53.407 18:51:22 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:23:53.407 18:51:22 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:23:53.407 18:51:22 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:23:53.407 18:51:22 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:23:53.407 18:51:22 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:53.407 18:51:22 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:53.664 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:53.923 Waiting for block devices as requested 00:23:54.181 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:54.181 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:54.181 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:23:54.440 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:23:59.723 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:23:59.723 18:51:28 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:23:59.723 18:51:28 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:23:59.723 18:51:28 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:23:59.723 18:51:28 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:23:59.723 18:51:28 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:23:59.723 18:51:28 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:23:59.723 18:51:28 nvme_fdp -- scripts/common.sh@18 -- # local i 00:23:59.723 18:51:28 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:23:59.723 18:51:28 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:23:59.723 18:51:28 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:23:59.723 18:51:28 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:23:59.724 18:51:28 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:23:59.724 18:51:28 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:23:59.724 18:51:28 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:23:59.724 18:51:28 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:23:59.724 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.724 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.724 18:51:28 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:23:59.724 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:23:59.724 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.724 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.724 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:23:59.724 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:23:59.724 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:23:59.724 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.724 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.724 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
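The rescan beginning above first re-declares the four bookkeeping structures from functions.sh@10-13. How scan_nvme_ctrls fills them can be read straight off the registrations earlier in this section; restated compactly:

    declare -A ctrls nvmes bdfs      # keyed by controller name, e.g. "nvme3"
    declare -a ordered_ctrls         # indexed by the trailing number
    ctrl_dev=nvme3
    ctrls["$ctrl_dev"]=nvme3         # id-ctrl fields live in the array "nvme3"
    nvmes["$ctrl_dev"]=nvme3_ns      # name of the per-namespace map
    bdfs["$ctrl_dev"]=0000:00:13.0   # PCI address backing the controller
    ordered_ctrls[${ctrl_dev/nvme/}]=nvme3   # index 3 <- "nvme3" minus "nvme"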
00:23:59.724 18:51:28 nvme_fdp -- nvme/functions.sh@21-23 -- # while IFS=: read -r reg val: each id-ctrl field is stored in the nvme0[] associative array:
00:23:59.724   vid=0x1b36 ssvid=0x1af4 sn='12341 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400
00:23:59.725   rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0
00:23:59.725   nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373
00:23:59.726   mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0
00:23:59.726   hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0
00:23:59.727   sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0
00:23:59.728   ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12341 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:23:59.729   ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
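A few of the values above pin down the controller's I/O limits: sqes=0x66 and cqes=0x44 encode 64-byte submission and 16-byte completion queue entries (log2 sizes in each nibble), and mdts=7 caps a single transfer at 2^7 minimum-sized pages. A quick check of that cap, assuming the usual 4 KiB CAP.MPSMIN page for QEMU's NVMe device (an assumption, since CAP is not shown in this trace):

    # Maximum data transfer size = 2^MDTS * minimum page size (assumed 4 KiB).
    mdts=7; page=4096
    echo $(( (1 << mdts) * page ))   # 524288 bytes = 512 KiB per command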
00:23:59.729   rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload='-'
00:23:59.729 18:51:28 nvme_fdp -- nvme/functions.sh@53-57 -- # _ctrl_ns=nvme0_ns; /sys/class/nvme/nvme0/nvme0n1 exists; ns_dev=nvme0n1; nvme_get nvme0n1 id-ns /dev/nvme0n1
00:23:59.729 18:51:28 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:23:59.729 18:51:28 nvme_fdp -- nvme/functions.sh@21-23 -- # id-ns fields stored in the nvme0n1[] associative array:
00:23:59.729   nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:23:59.730   nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:23:59.731   mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:23:59.731   lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:23:59.732   lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:23:59.732 18:51:28 nvme_fdp -- nvme/functions.sh@58-63 -- # _ctrl_ns[1]=nvme0n1; ctrls[nvme0]=nvme0; nvmes[nvme0]=nvme0_ns; bdfs[nvme0]=0000:00:11.0; ordered_ctrls[0]=nvme0
00:23:59.732 18:51:28 nvme_fdp -- nvme/functions.sh@47-50 -- # next controller: /sys/class/nvme/nvme1 exists, pci=0000:00:10.0, pci_can_use returns 0
00:23:59.732 18:51:28 nvme_fdp -- nvme/functions.sh@51-52 -- # ctrl_dev=nvme1; nvme_get nvme1 id-ctrl /dev/nvme1
00:23:59.733 18:51:28 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
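The geometry above is enough to size the namespace: the in-use format lbaf4 has lbads:12, i.e. 4096-byte blocks, and nsze=0x140000 blocks. Under those two values:

    # nsze blocks at the in-use LBA data size (lbads:12 -> 2^12 bytes/block)
    echo $(( 0x140000 * (1 << 12) ))   # 5368709120 bytes = 5 GiB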
00:23:59.733 18:51:28 nvme_fdp -- nvme/functions.sh@21-23 -- # id-ctrl fields stored in the nvme1[] associative array:
00:23:59.733   vid=0x1b36 ssvid=0x1af4 sn='12340 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400
00:23:59.734   rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000
00:23:59.734   crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3
00:23:59.734 18:51:28 nvme_fdp -- nvme/functions.sh@21 --
# IFS=: 00:23:59.734 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.734 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:23:59.734 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:23:59.734 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:23:59.734 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.734 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.734 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.734 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:23:59.734 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:23:59.734 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.734 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.734 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.734 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:23:59.734 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:23:59.734 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.734 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.734 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.735 18:51:28 nvme_fdp -- 
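
WCTEMP=343 and CCTEMP=373 just above are kelvins, per the NVMe spec; converting them shows the usual QEMU defaults:

# Thermal thresholds from the id-ctrl dump above (values in kelvins):
wctemp=343 cctemp=373
echo "warning threshold:  $((wctemp - 273)) C"    # -> 70 C
echo "critical threshold: $((cctemp - 273)) C"    # -> 100 C
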
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.735 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.736 
18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.736 18:51:28 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:23:59.736 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.738 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.738 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:23:59.738 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:23:59.738 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:23:59.738 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.738 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.738 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:23:59.738 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:23:59.738 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:23:59.738 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.738 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.738 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.738 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:23:59.738 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:23:59.738 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.738 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.738 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:23:59.738 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:23:59.738 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:23:59.738 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.738 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.738 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:23:59.738 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:23:59.738 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:23:59.738 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.738 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.738 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.738 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:23:59.738 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:23:59.738 18:51:28 nvme_fdp -- 
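
SQES=0x66 and CQES=0x44 above pack two powers of two per field: the low nibble is the required queue entry size, the high nibble the maximum. Decoded:

# Queue entry sizes from the dump (SQES=0x66, CQES=0x44):
sqes=0x66 cqes=0x44
printf 'SQ entry: min %d, max %d bytes\n' $((1 << (sqes & 0xf))) $((1 << (sqes >> 4)))
printf 'CQ entry: min %d, max %d bytes\n' $((1 << (cqes & 0xf))) $((1 << (cqes >> 4)))
# -> 64-byte submission and 16-byte completion entries, the spec minimums
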
nvme/functions.sh@21 -- # IFS=: 00:23:59.738 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.739 18:51:28 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.739 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
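
At functions.sh@53-57 above the script switches from controller to namespaces: it globs the controller's sysfs directory for nvme1n*, re-runs nvme_get against `id-ns`, and indexes the result through a nameref into nvme1_ns. A sketch of that walk (the wrapper name here is hypothetical; the body mirrors the trace and assumes nvme1_ns was declared earlier with declare -gA, as the script's arrays are):

enumerate_namespaces() {                      # hypothetical wrapper name
    local ctrl=$1 ns ns_dev                   # $1 = /sys/class/nvme/nvme1
    local -n _ctrl_ns=${ctrl##*/}_ns          # nameref to e.g. nvme1_ns
    for ns in "$ctrl/${ctrl##*/}n"*; do       # globs nvme1n1, nvme1n2, ...
        [[ -e $ns ]] || continue
        ns_dev=${ns##*/}
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"   # fills nvme1n1[...]
        _ctrl_ns[${ns_dev##*n}]=$ns_dev       # key = namespace number
    done
}
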
0x17a17a ]] 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:23:59.740 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:23:59.741 18:51:28 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.741 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.746 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.746 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:23:59.746 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:23:59.746 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.746 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.746 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.746 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:23:59.746 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:23:59.746 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.746 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.746 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.746 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:23:59.746 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:23:59.746 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.746 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.747 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:23:59.748 18:51:28 nvme_fdp -- scripts/common.sh@18 -- # local i 00:23:59.748 18:51:28 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:23:59.748 18:51:28 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:23:59.748 18:51:28 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:23:59.748 
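
nvme1n1 reports FLBAS=0x7 with lbaf7 = `ms:64 lbads:12 (in use)`: 4096-byte data blocks carrying 64 bytes of metadata, across NSZE=0x17a17a blocks. The raw size falls out directly (values copied from the trace):

# In-use LBA format and capacity for nvme1n1 (FLBAS=0x7 -> lbaf7, lbads:12):
nsze=0x17a17a lbads=12
echo "$((nsze)) blocks x $((1 << lbads)) B = $((nsze * (1 << lbads))) bytes"
# -> 1548666 blocks x 4096 B, roughly 6.3 GB of addressable data

With the namespace recorded, functions.sh@60-63 files the controller away: ctrls[nvme1]=nvme1, nvmes[nvme1]=nvme1_ns, bdfs[nvme1]=0000:00:10.0, plus an ordered_ctrls slot. The outer loop then moves on to nvme2 at PCI 0000:00:12.0, where pci_can_use passes because no allow-list is set.
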
18:51:28 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:23:59.748 18:51:28 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:23:59.749 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:24:00.051 18:51:28 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.051 18:51:28 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:24:00.051 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.052 18:51:28 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
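
The trace above is nvme/functions.sh walking the output of nvme-cli's id-ctrl command one "field : value" line at a time. For orientation, a representative excerpt of what the tool prints for this controller, reconstructed only from values already captured in the trace (exact column spacing varies between nvme-cli versions):

    $ /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
    ...
    oacs      : 0x12a
    acl       : 3
    aerl      : 3
    frmw      : 0x3
    lpa       : 0x7
    wctemp    : 343
    cctemp    : 373
    ...
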
00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.052 18:51:28 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.052 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:24:00.053 18:51:28 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
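
All of these near-identical entries come from a single loop in nvme/functions.sh. A minimal sketch of what lines @16-@23 of that script are doing, reconstructed from the xtrace above (the whitespace trimming is inferred, since parameter expansions do not show up in the trace, and the exact source may differ):

    nvme_get() {                     # e.g. nvme_get nvme2 id-ctrl /dev/nvme2
        local ref=$1 reg val         # functions.sh@17
        shift                        # functions.sh@18
        local -gA "$ref=()"          # functions.sh@20: global assoc array per device

        while IFS=: read -r reg val; do              # functions.sh@21
            reg=${reg//[[:space:]]/}                 # inferred: "ps    0 " -> "ps0"
            val=${val# }                             # inferred: drop leading blank
            [[ -n $val ]] &&                         # functions.sh@22
                eval "${ref}[${reg}]=\"${val}\""     # functions.sh@23
        done < <(/usr/local/src/nvme-cli/nvme "$@")  # functions.sh@16
    }

Each "[[ -n ... ]] / eval / assignment / IFS=: / read -r reg val" group in the log is one iteration of that loop.
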
00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.053 18:51:28 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:24:00.053 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:24:00.054 18:51:28 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
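
One small worked example from the nvme2n1 values just captured: FLBAS is 0x4 and, a little further down, lbaf4 is the format flagged "(in use)". The low nibble of FLBAS selects the active LBA format, and that format's lbads field is the log2 of the block size, so:

    # values taken from the nvme2n1 trace above
    flbas=0x4
    lbads=12                          # lbaf4: "ms:0 lbads:12 rp:0 (in use)"
    fmt=$((flbas & 0xf))              # low nibble of FLBAS -> format index 4
    bs=$((1 << lbads))                # 2^12 = 4096-byte logical blocks
    echo "lbaf$fmt in use, ${bs}B blocks, no metadata (ms:0)"
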
00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.054 18:51:28 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:24:00.054 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.055 18:51:28 nvme_fdp -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.055 18:51:28 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.055 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:24:00.056 18:51:28 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:24:00.056 18:51:28 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
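Every register in this id-ns walk is captured by the same loop: functions.sh reads the nvme-cli output as colon-separated reg/val pairs (IFS=: at @21), skips lines without a value (@22), and evals each pair into a global associative array named after the namespace (@23). A minimal sketch of that pattern, with illustrative names rather than the exact SPDK helper:

# Sketch only: mirrors the parse loop visible in the trace above.
# "ref" would be e.g. nvme2n3; the nvme-cli path matches the trace.
nvme_get_sketch() {
  local ref=$1 reg val
  declare -gA "$ref=()"                 # one global assoc array per namespace
  while IFS=: read -r reg val; do       # split "nsze : 0x100000" style lines
    [[ -n $val ]] || continue           # headers and blanks carry no value
    reg=${reg//[[:space:]]/}            # strip the padding around the name
    eval "${ref}[$reg]=\"${val# }\""    # e.g. nvme2n3[nsze]=0x100000
  done < <(/usr/local/src/nvme-cli/nvme id-ns "/dev/$ref")
}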
00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:24:00.057 
18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.057 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:24:00.058 18:51:28 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:24:00.058 18:51:28 nvme_fdp -- scripts/common.sh@18 -- # local i 00:24:00.058 18:51:28 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:24:00.058 18:51:28 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:24:00.058 18:51:28 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:24:00.058 18:51:28 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.058 18:51:28 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:24:00.058 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.059 
18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.059 18:51:28 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.059 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.060 
18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.060 18:51:28 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.060 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
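The registers are stored raw and decoded later by whichever test needs them. As a hedged aside (NVMe base specification semantics, not something this script computes), a few of the values captured above unpack as powers of two: lbads in the in-use LBA format gives the data size, and sqes/cqes carry the minimum and maximum queue entry sizes in their low and high nibbles:

# Values taken from the traces above: lbaf4 "lbads:12", sqes=0x66, cqes=0x44.
lbads=12 sqes=0x66 cqes=0x44
echo "LBA data size: $((1 << lbads)) bytes"                                   # 4096
echo "SQE size: min $((1 << (sqes & 0xf))), max $((1 << (sqes >> 4))) bytes"  # 64, 64
echo "CQE size: min $((1 << (cqes & 0xf))), max $((1 << (cqes >> 4))) bytes"  # 16, 16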
00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.061 18:51:28 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:24:00.061 18:51:28 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
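The loop running here walks every registered controller and keeps the ones whose ctratt advertises Flexible Data Placement: ctrl_has_fdp (functions.sh@176-180) reads the cached ctratt word and tests bit 19. A standalone sketch of that test, using the values from this run:

# Sketch of the bit test at functions.sh@180; CTRATT bit 19 = FDP support.
ctrl_has_fdp_sketch() {
  local ctratt=$1
  (( ctratt & 1 << 19 ))
}
ctrl_has_fdp_sketch 0x8000  || echo "no FDP"    # nvme0/nvme1/nvme2: bit 15 only
ctrl_has_fdp_sketch 0x88010 && echo "has FDP"   # nvme3: 0x80000 is set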
00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:24:00.061 18:51:28 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:24:00.061 18:51:28 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:24:00.061 18:51:28 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:24:00.061 18:51:28 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:00.629 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:01.563 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:01.563 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:01.563 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:24:01.563 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:24:01.563 18:51:30 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:24:01.563 18:51:30 nvme_fdp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:24:01.563 18:51:30 
nvme_fdp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:01.563 18:51:30 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:24:01.563 ************************************ 00:24:01.563 START TEST nvme_flexible_data_placement 00:24:01.563 ************************************ 00:24:01.563 18:51:30 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:24:01.821 Initializing NVMe Controllers 00:24:01.821 Attaching to 0000:00:13.0 00:24:01.821 Controller supports FDP Attached to 0000:00:13.0 00:24:01.821 Namespace ID: 1 Endurance Group ID: 1 00:24:01.821 Initialization complete. 00:24:01.821 00:24:01.821 ================================== 00:24:01.821 == FDP tests for Namespace: #01 == 00:24:01.821 ================================== 00:24:01.821 00:24:01.821 Get Feature: FDP: 00:24:01.821 ================= 00:24:01.821 Enabled: Yes 00:24:01.821 FDP configuration Index: 0 00:24:01.821 00:24:01.821 FDP configurations log page 00:24:01.821 =========================== 00:24:01.821 Number of FDP configurations: 1 00:24:01.821 Version: 0 00:24:01.821 Size: 112 00:24:01.821 FDP Configuration Descriptor: 0 00:24:01.821 Descriptor Size: 96 00:24:01.821 Reclaim Group Identifier format: 2 00:24:01.821 FDP Volatile Write Cache: Not Present 00:24:01.821 FDP Configuration: Valid 00:24:01.821 Vendor Specific Size: 0 00:24:01.821 Number of Reclaim Groups: 2 00:24:01.821 Number of Reclaim Unit Handles: 8 00:24:01.821 Max Placement Identifiers: 128 00:24:01.821 Number of Namespaces Supported: 256 00:24:01.821 Reclaim unit Nominal Size: 6000000 bytes 00:24:01.821 Estimated Reclaim Unit Time Limit: Not Reported 00:24:01.821 RUH Desc #000: RUH Type: Initially Isolated 00:24:01.821 RUH Desc #001: RUH Type: Initially Isolated 00:24:01.821 RUH Desc #002: RUH Type: Initially Isolated 00:24:01.821 RUH Desc #003: RUH Type: Initially Isolated 00:24:01.821 RUH Desc #004: RUH Type: Initially Isolated 00:24:01.821 RUH Desc #005: RUH Type: Initially Isolated 00:24:01.821 RUH Desc #006: RUH Type: Initially Isolated 00:24:01.821 RUH Desc #007: RUH Type: Initially Isolated 00:24:01.821 00:24:01.821 FDP reclaim unit handle usage log page 00:24:01.821 ====================================== 00:24:01.821 Number of Reclaim Unit Handles: 8 00:24:01.821 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:24:01.821 RUH Usage Desc #001: RUH Attributes: Unused 00:24:01.821 RUH Usage Desc #002: RUH Attributes: Unused 00:24:01.821 RUH Usage Desc #003: RUH Attributes: Unused 00:24:01.821 RUH Usage Desc #004: RUH Attributes: Unused 00:24:01.821 RUH Usage Desc #005: RUH Attributes: Unused 00:24:01.821 RUH Usage Desc #006: RUH Attributes: Unused 00:24:01.821 RUH Usage Desc #007: RUH Attributes: Unused 00:24:01.821 00:24:01.821 FDP statistics log page 00:24:01.821 ======================= 00:24:01.821 Host bytes with metadata written: 732430336 00:24:01.821 Media bytes with metadata written: 732512256 00:24:01.821 Media bytes erased: 0 00:24:01.821 00:24:01.821 FDP Reclaim unit handle status 00:24:01.821 ============================== 00:24:01.821 Number of RUHS descriptors: 2 00:24:01.821 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000004580 00:24:01.821 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:24:01.821 00:24:01.821 FDP write on placement id: 0 success 00:24:01.821 00:24:01.821 Set Feature: Enabling FDP events on Placement handle: #0 
Success 00:24:01.821 00:24:01.821 IO mgmt send: RUH update for Placement ID: #0 Success 00:24:01.821 00:24:01.821 Get Feature: FDP Events for Placement handle: #0 00:24:01.821 ======================== 00:24:01.821 Number of FDP Events: 6 00:24:01.821 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:24:01.821 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:24:01.821 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:24:01.821 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:24:01.821 FDP Event: #4 Type: Media Reallocated Enabled: No 00:24:01.821 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:24:01.821 00:24:01.821 FDP events log page 00:24:01.821 =================== 00:24:01.821 Number of FDP events: 1 00:24:01.821 FDP Event #0: 00:24:01.821 Event Type: RU Not Written to Capacity 00:24:01.821 Placement Identifier: Valid 00:24:01.821 NSID: Valid 00:24:01.821 Location: Valid 00:24:01.821 Placement Identifier: 0 00:24:01.821 Event Timestamp: 9 00:24:01.821 Namespace Identifier: 1 00:24:01.821 Reclaim Group Identifier: 0 00:24:01.821 Reclaim Unit Handle Identifier: 0 00:24:01.821 00:24:01.821 FDP test passed 00:24:01.821 00:24:01.821 real 0m0.318s 00:24:01.821 user 0m0.099s 00:24:01.821 sys 0m0.117s 00:24:01.821 18:51:30 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:01.821 18:51:30 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:24:01.821 ************************************ 00:24:01.821 END TEST nvme_flexible_data_placement 00:24:01.821 ************************************ 00:24:01.821 00:24:01.821 real 0m8.685s 00:24:01.821 user 0m1.443s 00:24:01.821 sys 0m2.199s 00:24:01.821 18:51:30 nvme_fdp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:01.821 18:51:30 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:24:01.821 ************************************ 00:24:01.821 END TEST nvme_fdp 00:24:01.821 ************************************ 00:24:02.085 18:51:30 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:24:02.085 18:51:30 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:24:02.085 18:51:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:02.085 18:51:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:02.085 18:51:30 -- common/autotest_common.sh@10 -- # set +x 00:24:02.085 ************************************ 00:24:02.085 START TEST nvme_rpc 00:24:02.085 ************************************ 00:24:02.085 18:51:30 nvme_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:24:02.085 * Looking for test storage... 
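
[Editor's sketch] The flexible-data-placement pass above can be reproduced by hand. This is only a sketch based on the command traced in the log; the traddr is specific to this VM:

  cd /home/vagrant/spdk_repo/spdk
  # Run the FDP example against the FDP-capable controller (0000:00:13.0 here);
  # it prints the configuration/usage/statistics/events log pages seen above.
  ./test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'
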
00:24:02.085 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:24:02.085 18:51:30 nvme_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:02.085 18:51:30 nvme_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:02.085 18:51:30 nvme_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:24:02.085 18:51:30 nvme_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:02.085 18:51:30 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:02.085 18:51:30 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:02.085 18:51:30 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:02.085 18:51:30 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:24:02.085 18:51:30 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:24:02.085 18:51:30 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:24:02.085 18:51:30 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:24:02.085 18:51:30 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:24:02.085 18:51:30 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:24:02.085 18:51:30 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:24:02.085 18:51:30 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:02.085 18:51:30 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:24:02.085 18:51:30 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:24:02.085 18:51:30 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:02.085 18:51:30 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:02.085 18:51:30 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:24:02.085 18:51:30 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:24:02.085 18:51:30 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:02.085 18:51:30 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:24:02.085 18:51:30 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:24:02.085 18:51:30 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:24:02.085 18:51:30 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:24:02.085 18:51:30 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:02.085 18:51:30 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:24:02.085 18:51:30 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:24:02.085 18:51:30 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:02.085 18:51:30 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:02.085 18:51:30 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:24:02.085 18:51:30 nvme_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:02.085 18:51:30 nvme_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:02.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.085 --rc genhtml_branch_coverage=1 00:24:02.085 --rc genhtml_function_coverage=1 00:24:02.085 --rc genhtml_legend=1 00:24:02.085 --rc geninfo_all_blocks=1 00:24:02.085 --rc geninfo_unexecuted_blocks=1 00:24:02.085 00:24:02.085 ' 00:24:02.085 18:51:30 nvme_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:02.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.085 --rc genhtml_branch_coverage=1 00:24:02.085 --rc genhtml_function_coverage=1 00:24:02.085 --rc genhtml_legend=1 00:24:02.085 --rc geninfo_all_blocks=1 00:24:02.085 --rc geninfo_unexecuted_blocks=1 00:24:02.085 00:24:02.085 ' 00:24:02.086 18:51:30 nvme_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:24:02.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.086 --rc genhtml_branch_coverage=1 00:24:02.086 --rc genhtml_function_coverage=1 00:24:02.086 --rc genhtml_legend=1 00:24:02.086 --rc geninfo_all_blocks=1 00:24:02.086 --rc geninfo_unexecuted_blocks=1 00:24:02.086 00:24:02.086 ' 00:24:02.086 18:51:30 nvme_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:02.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.086 --rc genhtml_branch_coverage=1 00:24:02.086 --rc genhtml_function_coverage=1 00:24:02.086 --rc genhtml_legend=1 00:24:02.086 --rc geninfo_all_blocks=1 00:24:02.086 --rc geninfo_unexecuted_blocks=1 00:24:02.086 00:24:02.086 ' 00:24:02.086 18:51:30 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:02.086 18:51:30 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:24:02.086 18:51:30 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:24:02.086 18:51:30 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:24:02.086 18:51:30 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:24:02.086 18:51:30 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:24:02.086 18:51:30 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:24:02.086 18:51:30 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:24:02.086 18:51:30 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:24:02.086 18:51:30 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:24:02.086 18:51:30 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:24:02.344 18:51:30 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:24:02.344 18:51:30 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:24:02.344 18:51:30 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:24:02.344 18:51:30 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:24:02.344 18:51:30 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=68168 00:24:02.344 18:51:30 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:24:02.344 18:51:30 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:24:02.344 18:51:30 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 68168 00:24:02.344 18:51:30 nvme_rpc -- common/autotest_common.sh@831 -- # '[' -z 68168 ']' 00:24:02.344 18:51:30 nvme_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:02.344 18:51:30 nvme_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:02.344 18:51:30 nvme_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:02.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:02.344 18:51:30 nvme_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:02.344 18:51:30 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:02.344 [2024-10-08 18:51:31.039388] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
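
[Editor's sketch] The BDF discovery traced above amounts to a small helper; condensed, and assuming the same repo path and that jq is installed:

  rootdir=/home/vagrant/spdk_repo/spdk
  # gen_nvme.sh emits a bdev config as JSON; pull every PCIe address out of it.
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} > 0 )) || exit 1   # bail out if no NVMe devices were found
  echo "${bdfs[0]}"                 # first BDF, 0000:00:10.0 on this host
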
00:24:02.344 [2024-10-08 18:51:31.039575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68168 ] 00:24:02.603 [2024-10-08 18:51:31.233415] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:02.861 [2024-10-08 18:51:31.556969] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:24:02.861 [2024-10-08 18:51:31.557003] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:24:04.234 18:51:32 nvme_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:04.234 18:51:32 nvme_rpc -- common/autotest_common.sh@864 -- # return 0 00:24:04.234 18:51:32 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:24:04.234 Nvme0n1 00:24:04.493 18:51:33 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:24:04.493 18:51:33 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:24:04.752 request: 00:24:04.752 { 00:24:04.752 "bdev_name": "Nvme0n1", 00:24:04.752 "filename": "non_existing_file", 00:24:04.752 "method": "bdev_nvme_apply_firmware", 00:24:04.752 "req_id": 1 00:24:04.752 } 00:24:04.752 Got JSON-RPC error response 00:24:04.752 response: 00:24:04.752 { 00:24:04.752 "code": -32603, 00:24:04.752 "message": "open file failed." 00:24:04.752 } 00:24:04.752 18:51:33 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:24:04.752 18:51:33 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:24:04.752 18:51:33 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:24:05.011 18:51:33 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:24:05.011 18:51:33 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 68168 00:24:05.011 18:51:33 nvme_rpc -- common/autotest_common.sh@950 -- # '[' -z 68168 ']' 00:24:05.011 18:51:33 nvme_rpc -- common/autotest_common.sh@954 -- # kill -0 68168 00:24:05.011 18:51:33 nvme_rpc -- common/autotest_common.sh@955 -- # uname 00:24:05.011 18:51:33 nvme_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:05.011 18:51:33 nvme_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68168 00:24:05.011 killing process with pid 68168 00:24:05.011 18:51:33 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:05.011 18:51:33 nvme_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:05.011 18:51:33 nvme_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68168' 00:24:05.011 18:51:33 nvme_rpc -- common/autotest_common.sh@969 -- # kill 68168 00:24:05.011 18:51:33 nvme_rpc -- common/autotest_common.sh@974 -- # wait 68168 00:24:08.293 00:24:08.293 real 0m5.768s 00:24:08.293 user 0m10.714s 00:24:08.293 sys 0m0.853s 00:24:08.293 18:51:36 nvme_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:08.293 ************************************ 00:24:08.293 END TEST nvme_rpc 00:24:08.293 ************************************ 00:24:08.293 18:51:36 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:08.293 18:51:36 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:24:08.293 18:51:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 
1 ']' 00:24:08.293 18:51:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:08.293 18:51:36 -- common/autotest_common.sh@10 -- # set +x 00:24:08.293 ************************************ 00:24:08.293 START TEST nvme_rpc_timeouts 00:24:08.293 ************************************ 00:24:08.293 18:51:36 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:24:08.293 * Looking for test storage... 00:24:08.293 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:24:08.293 18:51:36 nvme_rpc_timeouts -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:08.293 18:51:36 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lcov --version 00:24:08.293 18:51:36 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:08.293 18:51:36 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:08.293 18:51:36 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:08.293 18:51:36 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:08.293 18:51:36 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:08.293 18:51:36 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:24:08.293 18:51:36 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:24:08.293 18:51:36 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:24:08.293 18:51:36 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:24:08.293 18:51:36 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:24:08.293 18:51:36 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:24:08.293 18:51:36 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:24:08.293 18:51:36 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:08.293 18:51:36 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:24:08.293 18:51:36 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:24:08.293 18:51:36 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:08.293 18:51:36 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:08.293 18:51:36 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:24:08.293 18:51:36 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:24:08.293 18:51:36 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:08.293 18:51:36 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:24:08.293 18:51:36 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:24:08.293 18:51:36 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:24:08.293 18:51:36 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:24:08.293 18:51:36 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:08.293 18:51:36 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:24:08.293 18:51:36 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:24:08.293 18:51:36 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:08.293 18:51:36 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:08.293 18:51:36 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:24:08.293 18:51:36 nvme_rpc_timeouts -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:08.293 18:51:36 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:08.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.294 --rc genhtml_branch_coverage=1 00:24:08.294 --rc genhtml_function_coverage=1 00:24:08.294 --rc genhtml_legend=1 00:24:08.294 --rc geninfo_all_blocks=1 00:24:08.294 --rc geninfo_unexecuted_blocks=1 00:24:08.294 00:24:08.294 ' 00:24:08.294 18:51:36 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:08.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.294 --rc genhtml_branch_coverage=1 00:24:08.294 --rc genhtml_function_coverage=1 00:24:08.294 --rc genhtml_legend=1 00:24:08.294 --rc geninfo_all_blocks=1 00:24:08.294 --rc geninfo_unexecuted_blocks=1 00:24:08.294 00:24:08.294 ' 00:24:08.294 18:51:36 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:08.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.294 --rc genhtml_branch_coverage=1 00:24:08.294 --rc genhtml_function_coverage=1 00:24:08.294 --rc genhtml_legend=1 00:24:08.294 --rc geninfo_all_blocks=1 00:24:08.294 --rc geninfo_unexecuted_blocks=1 00:24:08.294 00:24:08.294 ' 00:24:08.294 18:51:36 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:08.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.294 --rc genhtml_branch_coverage=1 00:24:08.294 --rc genhtml_function_coverage=1 00:24:08.294 --rc genhtml_legend=1 00:24:08.294 --rc geninfo_all_blocks=1 00:24:08.294 --rc geninfo_unexecuted_blocks=1 00:24:08.294 00:24:08.294 ' 00:24:08.294 18:51:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:08.294 18:51:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_68263 00:24:08.294 18:51:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_68263 00:24:08.294 18:51:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=68295 00:24:08.294 18:51:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:24:08.294 18:51:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:24:08.294 18:51:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 68295 00:24:08.294 18:51:36 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # '[' -z 68295 ']' 00:24:08.294 18:51:36 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:08.294 18:51:36 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:08.294 18:51:36 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:08.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:08.294 18:51:36 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:08.294 18:51:36 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:24:08.294 [2024-10-08 18:51:36.702504] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:24:08.294 [2024-10-08 18:51:36.702873] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68295 ] 00:24:08.294 [2024-10-08 18:51:36.875003] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:08.552 [2024-10-08 18:51:37.151792] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:24:08.552 [2024-10-08 18:51:37.151792] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:24:09.488 18:51:38 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:09.488 18:51:38 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # return 0 00:24:09.488 18:51:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:24:09.488 Checking default timeout settings: 00:24:09.488 18:51:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:24:10.064 18:51:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:24:10.064 Making settings changes with rpc: 00:24:10.064 18:51:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:24:10.064 18:51:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:24:10.064 Check default vs. 
modified settings: 00:24:10.064 18:51:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:24:10.631 18:51:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:24:10.631 18:51:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:24:10.631 18:51:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_68263 00:24:10.631 18:51:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:24:10.631 18:51:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:24:10.631 18:51:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:24:10.631 18:51:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_68263 00:24:10.631 18:51:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:24:10.631 18:51:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:24:10.631 Setting action_on_timeout is changed as expected. 00:24:10.631 18:51:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:24:10.631 18:51:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:24:10.631 18:51:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:24:10.631 18:51:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:24:10.631 18:51:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_68263 00:24:10.631 18:51:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:24:10.631 18:51:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:24:10.631 18:51:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:24:10.631 18:51:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_68263 00:24:10.631 18:51:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:24:10.631 18:51:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:24:10.631 Setting timeout_us is changed as expected. 00:24:10.631 18:51:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:24:10.631 18:51:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:24:10.631 18:51:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:24:10.631 18:51:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:24:10.631 18:51:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:24:10.631 18:51:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_68263 00:24:10.631 18:51:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:24:10.631 18:51:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:24:10.631 18:51:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_68263 00:24:10.631 18:51:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:24:10.631 18:51:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:24:10.631 Setting timeout_admin_us is changed as expected. 00:24:10.631 18:51:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:24:10.631 18:51:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:24:10.631 18:51:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:24:10.631 18:51:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:24:10.631 18:51:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_68263 /tmp/settings_modified_68263 00:24:10.631 18:51:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 68295 00:24:10.631 18:51:39 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # '[' -z 68295 ']' 00:24:10.631 18:51:39 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # kill -0 68295 00:24:10.632 18:51:39 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # uname 00:24:10.632 18:51:39 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:10.632 18:51:39 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68295 00:24:10.632 killing process with pid 68295 00:24:10.632 18:51:39 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:10.632 18:51:39 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:10.632 18:51:39 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68295' 00:24:10.632 18:51:39 nvme_rpc_timeouts -- common/autotest_common.sh@969 -- # kill 68295 00:24:10.632 18:51:39 nvme_rpc_timeouts -- common/autotest_common.sh@974 -- # wait 68295 00:24:13.914 RPC TIMEOUT SETTING TEST PASSED. 00:24:13.914 18:51:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
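
[Editor's sketch] Condensed, the timeout check that just passed looks like this; the rpc.py calls, option values, and grep/awk/sed comparison are taken from the trace (the temp-file names here are arbitrary):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc save_config > /tmp/settings_default               # snapshot defaults
  $rpc bdev_nvme_set_options --timeout-us=12000000 \
      --timeout-admin-us=24000000 --action-on-timeout=abort
  $rpc save_config > /tmp/settings_modified              # snapshot after change
  for setting in action_on_timeout timeout_us timeout_admin_us; do
    before=$(grep "$setting" /tmp/settings_default | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep "$setting" /tmp/settings_modified | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    [[ $before != "$after" ]] && echo "Setting $setting is changed as expected."
  done
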
00:24:13.914 ************************************ 00:24:13.914 END TEST nvme_rpc_timeouts 00:24:13.914 ************************************ 00:24:13.914 00:24:13.914 real 0m5.809s 00:24:13.914 user 0m11.099s 00:24:13.914 sys 0m0.770s 00:24:13.914 18:51:42 nvme_rpc_timeouts -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:13.914 18:51:42 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:24:13.914 18:51:42 -- spdk/autotest.sh@239 -- # uname -s 00:24:13.914 18:51:42 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:24:13.914 18:51:42 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:24:13.914 18:51:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:13.914 18:51:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:13.914 18:51:42 -- common/autotest_common.sh@10 -- # set +x 00:24:13.914 ************************************ 00:24:13.914 START TEST sw_hotplug 00:24:13.914 ************************************ 00:24:13.914 18:51:42 sw_hotplug -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:24:13.914 * Looking for test storage... 00:24:13.914 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:24:13.914 18:51:42 sw_hotplug -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:13.914 18:51:42 sw_hotplug -- common/autotest_common.sh@1681 -- # lcov --version 00:24:13.914 18:51:42 sw_hotplug -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:13.914 18:51:42 sw_hotplug -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:13.914 18:51:42 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:13.914 18:51:42 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:13.914 18:51:42 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:13.914 18:51:42 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:24:13.914 18:51:42 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:24:13.914 18:51:42 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:24:13.914 18:51:42 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:24:13.914 18:51:42 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:24:13.914 18:51:42 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:24:13.914 18:51:42 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:24:13.914 18:51:42 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:13.914 18:51:42 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:24:13.914 18:51:42 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:24:13.915 18:51:42 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:13.915 18:51:42 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:13.915 18:51:42 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:24:13.915 18:51:42 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:24:13.915 18:51:42 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:13.915 18:51:42 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:24:13.915 18:51:42 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:24:13.915 18:51:42 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:24:13.915 18:51:42 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:24:13.915 18:51:42 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:13.915 18:51:42 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:24:13.915 18:51:42 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:24:13.915 18:51:42 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:13.915 18:51:42 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:13.915 18:51:42 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:24:13.915 18:51:42 sw_hotplug -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:13.915 18:51:42 sw_hotplug -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:13.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:13.915 --rc genhtml_branch_coverage=1 00:24:13.915 --rc genhtml_function_coverage=1 00:24:13.915 --rc genhtml_legend=1 00:24:13.915 --rc geninfo_all_blocks=1 00:24:13.915 --rc geninfo_unexecuted_blocks=1 00:24:13.915 00:24:13.915 ' 00:24:13.915 18:51:42 sw_hotplug -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:13.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:13.915 --rc genhtml_branch_coverage=1 00:24:13.915 --rc genhtml_function_coverage=1 00:24:13.915 --rc genhtml_legend=1 00:24:13.915 --rc geninfo_all_blocks=1 00:24:13.915 --rc geninfo_unexecuted_blocks=1 00:24:13.915 00:24:13.915 ' 00:24:13.915 18:51:42 sw_hotplug -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:13.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:13.915 --rc genhtml_branch_coverage=1 00:24:13.915 --rc genhtml_function_coverage=1 00:24:13.915 --rc genhtml_legend=1 00:24:13.915 --rc geninfo_all_blocks=1 00:24:13.915 --rc geninfo_unexecuted_blocks=1 00:24:13.915 00:24:13.915 ' 00:24:13.915 18:51:42 sw_hotplug -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:13.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:13.915 --rc genhtml_branch_coverage=1 00:24:13.915 --rc genhtml_function_coverage=1 00:24:13.915 --rc genhtml_legend=1 00:24:13.915 --rc geninfo_all_blocks=1 00:24:13.915 --rc geninfo_unexecuted_blocks=1 00:24:13.915 00:24:13.915 ' 00:24:13.915 18:51:42 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:14.174 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:14.432 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:14.432 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:14.432 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:14.432 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:14.432 18:51:43 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:24:14.432 18:51:43 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:24:14.432 18:51:43 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
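
[Editor's sketch] The nvme_in_userspace expansion traced next reduces to a single pipeline. As a standalone sketch, copied from the lspci/awk lines below:

  # Class 01 (mass storage), subclass 08 (NVM), progif 02 (NVMe): print the
  # domain:bus:device.function of every matching PCI function.
  lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
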
00:24:14.432 18:51:43 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@233 -- # local class 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@18 -- # local i 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@18 -- # local i 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@18 -- # local i 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:24:14.432 18:51:43 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@18 -- # local i 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:24:14.432 18:51:43 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:24:14.433 18:51:43 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:24:14.433 18:51:43 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:24:14.433 18:51:43 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:24:14.433 18:51:43 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:24:14.433 18:51:43 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:24:14.433 18:51:43 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:24:14.433 18:51:43 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:24:14.433 18:51:43 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:24:14.433 18:51:43 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:24:14.433 18:51:43 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:24:14.433 18:51:43 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:24:14.433 18:51:43 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:24:14.433 18:51:43 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:24:14.433 18:51:43 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:24:14.433 18:51:43 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:24:14.691 18:51:43 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:24:14.691 18:51:43 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:24:14.691 18:51:43 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:14.948 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:15.206 Waiting for block devices as requested 00:24:15.206 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:15.206 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:15.465 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:24:15.465 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:24:20.786 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:24:20.786 18:51:49 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:24:20.786 18:51:49 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:21.055 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:24:21.313 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:21.313 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:24:21.572 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:24:21.830 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:21.830 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:22.089 18:51:50 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:24:22.089 18:51:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:24:22.089 18:51:50 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:24:22.089 18:51:50 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:24:22.089 18:51:50 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=69185 00:24:22.089 18:51:50 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:24:22.089 18:51:50 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:24:22.089 18:51:50 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:24:22.089 18:51:50 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:24:22.089 18:51:50 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:24:22.089 18:51:50 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:24:22.089 18:51:50 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:24:22.089 18:51:50 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:24:22.089 18:51:50 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:24:22.089 18:51:50 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:24:22.089 18:51:50 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:24:22.089 18:51:50 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:24:22.089 18:51:50 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:24:22.089 18:51:50 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:24:22.347 Initializing NVMe Controllers 00:24:22.347 Attaching to 0000:00:10.0 00:24:22.347 Attaching to 0000:00:11.0 00:24:22.347 Attached to 0000:00:10.0 00:24:22.347 Attached to 0000:00:11.0 00:24:22.347 Initialization complete. Starting I/O... 
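
[Editor's sketch] The I/O counters that follow come from SPDK's hotplug example, launched in the background by the script. The invocation is copied verbatim from the trace above; nothing beyond that is implied here:

  # The example drives I/O against the attached controllers and logs the
  # per-controller completion counters seen below while the script
  # hot-removes and re-adds the devices underneath it.
  /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning &
  hotplug_pid=$!
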
00:24:22.347 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:24:22.347 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:24:22.347 00:24:23.282 QEMU NVMe Ctrl (12340 ): 1082 I/Os completed (+1082) 00:24:23.282 QEMU NVMe Ctrl (12341 ): 1161 I/Os completed (+1161) 00:24:23.282 00:24:24.660 QEMU NVMe Ctrl (12340 ): 2362 I/Os completed (+1280) 00:24:24.660 QEMU NVMe Ctrl (12341 ): 2451 I/Os completed (+1290) 00:24:24.660 00:24:25.595 QEMU NVMe Ctrl (12340 ): 4016 I/Os completed (+1654) 00:24:25.595 QEMU NVMe Ctrl (12341 ): 4148 I/Os completed (+1697) 00:24:25.595 00:24:26.530 QEMU NVMe Ctrl (12340 ): 5707 I/Os completed (+1691) 00:24:26.530 QEMU NVMe Ctrl (12341 ): 5842 I/Os completed (+1694) 00:24:26.530 00:24:27.466 QEMU NVMe Ctrl (12340 ): 6991 I/Os completed (+1284) 00:24:27.466 QEMU NVMe Ctrl (12341 ): 7157 I/Os completed (+1315) 00:24:27.466 00:24:28.055 18:51:56 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:24:28.055 18:51:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:24:28.055 18:51:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:24:28.055 [2024-10-08 18:51:56.733568] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:24:28.055 Controller removed: QEMU NVMe Ctrl (12340 ) 00:24:28.055 [2024-10-08 18:51:56.736742] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:28.055 [2024-10-08 18:51:56.736995] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:28.055 [2024-10-08 18:51:56.737186] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:28.055 [2024-10-08 18:51:56.737233] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:28.055 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:24:28.055 [2024-10-08 18:51:56.741669] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:28.055 [2024-10-08 18:51:56.741745] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:28.055 [2024-10-08 18:51:56.741774] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:28.055 [2024-10-08 18:51:56.741807] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:28.055 18:51:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:24:28.055 18:51:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:24:28.055 [2024-10-08 18:51:56.772898] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:24:28.055 Controller removed: QEMU NVMe Ctrl (12341 ) 00:24:28.055 [2024-10-08 18:51:56.775880] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:28.055 [2024-10-08 18:51:56.776157] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:28.055 [2024-10-08 18:51:56.776218] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:28.055 [2024-10-08 18:51:56.776249] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:28.055 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:24:28.055 [2024-10-08 18:51:56.780515] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:28.055 [2024-10-08 18:51:56.780732] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:28.055 [2024-10-08 18:51:56.780928] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:28.055 [2024-10-08 18:51:56.781149] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:28.055 18:51:56 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:24:28.055 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:24:28.055 18:51:56 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:24:28.055 EAL: Scan for (pci) bus failed. 00:24:28.314 18:51:56 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:24:28.314 18:51:56 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:24:28.314 18:51:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:24:28.314 00:24:28.314 18:51:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:24:28.572 18:51:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:24:28.572 18:51:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:24:28.572 18:51:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:24:28.572 18:51:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:24:28.572 Attaching to 0000:00:10.0 00:24:28.572 Attached to 0000:00:10.0 00:24:28.572 18:51:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:24:28.572 18:51:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:24:28.572 18:51:57 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:24:28.572 Attaching to 0000:00:11.0 00:24:28.572 Attached to 0000:00:11.0 00:24:29.508 QEMU NVMe Ctrl (12340 ): 1440 I/Os completed (+1440) 00:24:29.508 QEMU NVMe Ctrl (12341 ): 1285 I/Os completed (+1285) 00:24:29.508 00:24:30.449 QEMU NVMe Ctrl (12340 ): 3012 I/Os completed (+1572) 00:24:30.449 QEMU NVMe Ctrl (12341 ): 2863 I/Os completed (+1578) 00:24:30.449 00:24:31.384 QEMU NVMe Ctrl (12340 ): 4588 I/Os completed (+1576) 00:24:31.384 QEMU NVMe Ctrl (12341 ): 4460 I/Os completed (+1597) 00:24:31.384 00:24:32.319 QEMU NVMe Ctrl (12340 ): 5927 I/Os completed (+1339) 00:24:32.319 QEMU NVMe Ctrl (12341 ): 5966 I/Os completed (+1506) 00:24:32.319 00:24:33.253 QEMU NVMe Ctrl (12340 ): 7607 I/Os completed (+1680) 00:24:33.253 QEMU NVMe Ctrl (12341 ): 7790 I/Os completed (+1824) 00:24:33.253 00:24:34.628 QEMU NVMe Ctrl (12340 ): 9572 I/Os completed (+1965) 00:24:34.628 QEMU NVMe Ctrl (12341 ): 9801 I/Os completed (+2011) 00:24:34.628 00:24:35.563 QEMU NVMe Ctrl (12340 ): 10882 I/Os completed (+1310) 00:24:35.563 QEMU NVMe Ctrl (12341 ): 11225 I/Os completed (+1424) 
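
[Editor's sketch] A note on reading the bare "echo 1" / "echo uio_pci_generic" lines above: bash xtrace does not print redirections, so the sysfs targets never appear in the log. The conventional PCI hotplug sequence they correspond to looks like the following; the exact paths are an assumption, since the log omits them:

  bdf=0000:00:10.0
  echo 1 > "/sys/bus/pci/devices/$bdf/remove"     # surprise-remove the function
  echo 1 > /sys/bus/pci/rescan                    # have the kernel rediscover it
  echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
  echo "$bdf" > /sys/bus/pci/drivers_probe        # rebind using the override
  echo '' > "/sys/bus/pci/devices/$bdf/driver_override"   # clear the override
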
00:24:35.563 00:24:36.498 QEMU NVMe Ctrl (12340 ): 12416 I/Os completed (+1534) 00:24:36.498 QEMU NVMe Ctrl (12341 ): 12787 I/Os completed (+1562) 00:24:36.498 00:24:37.433 QEMU NVMe Ctrl (12340 ): 13739 I/Os completed (+1323) 00:24:37.433 QEMU NVMe Ctrl (12341 ): 14158 I/Os completed (+1371) 00:24:37.433 00:24:38.369 QEMU NVMe Ctrl (12340 ): 14962 I/Os completed (+1223) 00:24:38.369 QEMU NVMe Ctrl (12341 ): 15470 I/Os completed (+1312) 00:24:38.369 00:24:39.305 QEMU NVMe Ctrl (12340 ): 16366 I/Os completed (+1404) 00:24:39.305 QEMU NVMe Ctrl (12341 ): 16891 I/Os completed (+1421) 00:24:39.305 00:24:40.240 QEMU NVMe Ctrl (12340 ): 17671 I/Os completed (+1305) 00:24:40.241 QEMU NVMe Ctrl (12341 ): 18205 I/Os completed (+1314) 00:24:40.241 00:24:40.499 18:52:09 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:24:40.499 18:52:09 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:24:40.499 18:52:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:24:40.499 18:52:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:24:40.499 [2024-10-08 18:52:09.184845] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:24:40.499 Controller removed: QEMU NVMe Ctrl (12340 ) 00:24:40.500 [2024-10-08 18:52:09.187167] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:40.500 [2024-10-08 18:52:09.187377] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:40.500 [2024-10-08 18:52:09.187446] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:40.500 [2024-10-08 18:52:09.187565] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:40.500 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:24:40.500 [2024-10-08 18:52:09.191260] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:40.500 [2024-10-08 18:52:09.191448] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:40.500 [2024-10-08 18:52:09.191511] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:40.500 [2024-10-08 18:52:09.191646] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:40.500 18:52:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:24:40.500 18:52:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:24:40.500 [2024-10-08 18:52:09.221514] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:24:40.500 Controller removed: QEMU NVMe Ctrl (12341 ) 00:24:40.500 [2024-10-08 18:52:09.223468] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:40.500 [2024-10-08 18:52:09.223528] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:40.500 [2024-10-08 18:52:09.223561] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:40.500 [2024-10-08 18:52:09.223585] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:40.500 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:24:40.500 [2024-10-08 18:52:09.226704] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:40.500 [2024-10-08 18:52:09.226879] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:40.500 [2024-10-08 18:52:09.226912] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:40.500 [2024-10-08 18:52:09.226936] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:40.500 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:24:40.500 EAL: Scan for (pci) bus failed. 00:24:40.500 18:52:09 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:24:40.759 18:52:09 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:24:40.759 18:52:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:24:40.759 18:52:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:24:40.759 18:52:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:24:40.759 18:52:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:24:40.759 18:52:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:24:40.759 18:52:09 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:24:40.759 18:52:09 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:24:40.759 18:52:09 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:24:40.759 Attaching to 0000:00:10.0 00:24:40.759 Attached to 0000:00:10.0 00:24:41.017 18:52:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:24:41.017 18:52:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:24:41.017 18:52:09 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:24:41.017 Attaching to 0000:00:11.0 00:24:41.017 Attached to 0000:00:11.0 00:24:41.274 QEMU NVMe Ctrl (12340 ): 760 I/Os completed (+760) 00:24:41.274 QEMU NVMe Ctrl (12341 ): 611 I/Os completed (+611) 00:24:41.274 00:24:42.648 QEMU NVMe Ctrl (12340 ): 2187 I/Os completed (+1427) 00:24:42.648 QEMU NVMe Ctrl (12341 ): 2272 I/Os completed (+1661) 00:24:42.648 00:24:43.584 QEMU NVMe Ctrl (12340 ): 3494 I/Os completed (+1307) 00:24:43.584 QEMU NVMe Ctrl (12341 ): 3592 I/Os completed (+1320) 00:24:43.584 00:24:44.520 QEMU NVMe Ctrl (12340 ): 4945 I/Os completed (+1451) 00:24:44.520 QEMU NVMe Ctrl (12341 ): 5065 I/Os completed (+1473) 00:24:44.520 00:24:45.453 QEMU NVMe Ctrl (12340 ): 6517 I/Os completed (+1572) 00:24:45.453 QEMU NVMe Ctrl (12341 ): 6648 I/Os completed (+1583) 00:24:45.453 00:24:46.387 QEMU NVMe Ctrl (12340 ): 8015 I/Os completed (+1498) 00:24:46.387 QEMU NVMe Ctrl (12341 ): 8161 I/Os completed (+1513) 00:24:46.387 00:24:47.321 QEMU NVMe Ctrl (12340 ): 9651 I/Os completed (+1636) 00:24:47.321 QEMU NVMe Ctrl (12341 ): 9801 I/Os completed (+1640) 00:24:47.321 00:24:48.257 QEMU 
NVMe Ctrl (12340 ): 11100 I/Os completed (+1449) 00:24:48.257 QEMU NVMe Ctrl (12341 ): 11287 I/Os completed (+1486) 00:24:48.257 00:24:49.633 QEMU NVMe Ctrl (12340 ): 12808 I/Os completed (+1708) 00:24:49.633 QEMU NVMe Ctrl (12341 ): 12997 I/Os completed (+1710) 00:24:49.633 00:24:50.603 QEMU NVMe Ctrl (12340 ): 14398 I/Os completed (+1590) 00:24:50.603 QEMU NVMe Ctrl (12341 ): 14597 I/Os completed (+1600) 00:24:50.603 00:24:51.538 QEMU NVMe Ctrl (12340 ): 15750 I/Os completed (+1352) 00:24:51.538 QEMU NVMe Ctrl (12341 ): 15954 I/Os completed (+1357) 00:24:51.538 00:24:52.473 QEMU NVMe Ctrl (12340 ): 17282 I/Os completed (+1532) 00:24:52.473 QEMU NVMe Ctrl (12341 ): 17487 I/Os completed (+1533) 00:24:52.473 00:24:53.040 18:52:21 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:24:53.040 18:52:21 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:24:53.040 18:52:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:24:53.040 18:52:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:24:53.040 [2024-10-08 18:52:21.605085] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:24:53.040 Controller removed: QEMU NVMe Ctrl (12340 ) 00:24:53.040 [2024-10-08 18:52:21.609212] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:53.040 [2024-10-08 18:52:21.609449] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:53.040 [2024-10-08 18:52:21.609539] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:53.040 [2024-10-08 18:52:21.609622] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:53.040 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:24:53.040 [2024-10-08 18:52:21.614024] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:53.040 [2024-10-08 18:52:21.614248] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:53.040 [2024-10-08 18:52:21.614337] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:53.040 [2024-10-08 18:52:21.614414] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:53.040 18:52:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:24:53.040 18:52:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:24:53.040 [2024-10-08 18:52:21.644747] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:24:53.040 Controller removed: QEMU NVMe Ctrl (12341 ) 00:24:53.040 [2024-10-08 18:52:21.646662] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:53.040 [2024-10-08 18:52:21.646726] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:53.040 [2024-10-08 18:52:21.646754] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:53.040 [2024-10-08 18:52:21.646778] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:53.040 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:24:53.040 [2024-10-08 18:52:21.649911] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:53.040 [2024-10-08 18:52:21.649980] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:53.040 [2024-10-08 18:52:21.650010] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:53.040 [2024-10-08 18:52:21.650030] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:53.040 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:24:53.040 EAL: Scan for (pci) bus failed. 00:24:53.040 18:52:21 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:24:53.040 18:52:21 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:24:53.298 18:52:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:24:53.298 18:52:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:24:53.298 18:52:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:24:53.298 18:52:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:24:53.298 18:52:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:24:53.298 18:52:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:24:53.298 18:52:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:24:53.298 18:52:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:24:53.298 Attaching to 0000:00:10.0 00:24:53.298 Attached to 0000:00:10.0 00:24:53.298 QEMU NVMe Ctrl (12340 ): 104 I/Os completed (+104) 00:24:53.298 00:24:53.298 18:52:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:24:53.298 18:52:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:24:53.298 18:52:22 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:24:53.298 Attaching to 0000:00:11.0 00:24:53.298 Attached to 0000:00:11.0 00:24:53.298 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:24:53.298 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:24:53.557 [2024-10-08 18:52:22.055963] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:25:05.827 18:52:34 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:25:05.827 18:52:34 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:25:05.827 18:52:34 sw_hotplug -- common/autotest_common.sh@717 -- # time=43.32 00:25:05.827 18:52:34 sw_hotplug -- common/autotest_common.sh@718 -- # echo 43.32 00:25:05.827 18:52:34 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:25:05.827 18:52:34 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.32 00:25:05.827 18:52:34 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.32 2 00:25:05.827 remove_attach_helper took 43.32s 
to complete (handling 2 nvme drive(s)) 18:52:34 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:25:12.386 18:52:40 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 69185 00:25:12.387 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (69185) - No such process 00:25:12.387 18:52:40 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 69185 00:25:12.387 18:52:40 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:25:12.387 18:52:40 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:25:12.387 18:52:40 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:25:12.387 18:52:40 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=69725 00:25:12.387 18:52:40 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:12.387 18:52:40 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:25:12.387 18:52:40 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 69725 00:25:12.387 18:52:40 sw_hotplug -- common/autotest_common.sh@831 -- # '[' -z 69725 ']' 00:25:12.387 18:52:40 sw_hotplug -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:12.387 18:52:40 sw_hotplug -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:12.387 18:52:40 sw_hotplug -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:12.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:12.387 18:52:40 sw_hotplug -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:12.387 18:52:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:25:12.387 [2024-10-08 18:52:40.197128] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
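The first half of the test ends here: the process it was waiting on (pid 69185) has already exited, so kill -0 reports "No such process" and the wait at line 95 returns immediately, with timing_cmd reporting 43.32 s for its three hotplug events. The second half (tgt_run_hotplug, line 151) launches spdk_tgt (pid 69725) and blocks in waitforlisten until the RPC socket at /var/tmp/spdk.sock answers, giving up after max_retries=100. A minimal bash equivalent of that wait, assuming the liveness probe is simply "process alive and socket present" (the real helper in autotest_common.sh may issue an RPC ping instead):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do              # max_retries=100, as traced
            kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
            [[ -S $rpc_addr ]] && return 0           # UNIX-domain socket is up
            sleep 0.1
        done
        return 1                                     # timed out
    }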
00:25:12.387 [2024-10-08 18:52:40.197329] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69725 ] 00:25:12.387 [2024-10-08 18:52:40.386103] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:12.387 [2024-10-08 18:52:40.682183] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:25:12.953 18:52:41 sw_hotplug -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:12.953 18:52:41 sw_hotplug -- common/autotest_common.sh@864 -- # return 0 00:25:12.953 18:52:41 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:25:12.953 18:52:41 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.953 18:52:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:25:12.953 18:52:41 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.953 18:52:41 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:25:12.953 18:52:41 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:25:12.953 18:52:41 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:25:12.953 18:52:41 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:25:12.953 18:52:41 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:25:12.953 18:52:41 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:25:12.953 18:52:41 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:25:12.953 18:52:41 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:25:12.953 18:52:41 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:25:12.953 18:52:41 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:25:12.953 18:52:41 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:25:12.953 18:52:41 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:25:12.953 18:52:41 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:25:19.555 18:52:47 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:25:19.555 18:52:47 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:25:19.555 18:52:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:25:19.555 18:52:47 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:25:19.555 18:52:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:25:19.555 18:52:47 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:25:19.555 18:52:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:25:19.555 18:52:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:25:19.555 18:52:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:25:19.555 18:52:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:25:19.555 18:52:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:25:19.555 18:52:47 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.555 18:52:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:25:19.556 [2024-10-08 18:52:47.759307] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:25:19.556 [2024-10-08 18:52:47.762171] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:19.556 [2024-10-08 18:52:47.762227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.556 [2024-10-08 18:52:47.762248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.556 [2024-10-08 18:52:47.762279] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:19.556 [2024-10-08 18:52:47.762310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.556 [2024-10-08 18:52:47.762327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.556 [2024-10-08 18:52:47.762343] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:19.556 [2024-10-08 18:52:47.762361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.556 [2024-10-08 18:52:47.762375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.556 [2024-10-08 18:52:47.762396] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:19.556 [2024-10-08 18:52:47.762409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.556 [2024-10-08 18:52:47.762426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.556 18:52:47 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.556 18:52:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:25:19.556 18:52:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:25:19.556 [2024-10-08 18:52:48.159333] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
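Each surprise removal drops the controller into failed state, and the PCIe qpair layer aborts whatever is still queued. Here, and in the matching dump for 0000:00:11.0 that follows, the only victims are the four Asynchronous Event Requests kept posted on the admin queue, so the records decode identically every cycle:

    # Reading one abort record from these dumps:
    #   ASYNC EVENT REQUEST (0c)      admin opcode 0x0c, the AERs a driver
    #                                 keeps outstanding on every controller
    #   qid:0 cid:190..187            admin queue (qid 0), four queued AERs
    #   ABORTED - BY REQUEST (00/07)  status code type 0x0, status code 0x07
    #                                 ("Command Abort Requested")
    #   dnr:0                         do-not-retry clear: the abort is benign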
00:25:19.556 [2024-10-08 18:52:48.162324] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:19.556 [2024-10-08 18:52:48.162375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.556 [2024-10-08 18:52:48.162397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.556 [2024-10-08 18:52:48.162423] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:19.556 [2024-10-08 18:52:48.162440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.556 [2024-10-08 18:52:48.162454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.556 [2024-10-08 18:52:48.162471] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:19.556 [2024-10-08 18:52:48.162484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.556 [2024-10-08 18:52:48.162500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.556 [2024-10-08 18:52:48.162514] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:19.556 [2024-10-08 18:52:48.162529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.556 [2024-10-08 18:52:48.162542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.556 18:52:48 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:25:19.556 18:52:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:25:19.556 18:52:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:25:19.556 18:52:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:25:19.556 18:52:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:25:19.556 18:52:48 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.556 18:52:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:25:19.556 18:52:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:25:19.556 18:52:48 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.817 18:52:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:25:19.817 18:52:48 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:25:19.817 18:52:48 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:25:19.817 18:52:48 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:25:19.817 18:52:48 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:25:19.817 18:52:48 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:25:20.075 18:52:48 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:25:20.075 18:52:48 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:25:20.075 18:52:48 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:25:20.075 18:52:48 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:25:20.075 18:52:48 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:25:20.075 18:52:48 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:25:20.075 18:52:48 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:25:32.272 18:53:00 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:25:32.272 18:53:00 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:25:32.272 18:53:00 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:25:32.272 18:53:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:25:32.272 18:53:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:25:32.272 18:53:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:25:32.272 18:53:00 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.272 18:53:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:25:32.272 18:53:00 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.272 18:53:00 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:25:32.272 18:53:00 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:25:32.272 18:53:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:25:32.272 18:53:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:25:32.272 18:53:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:25:32.272 18:53:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:25:32.272 18:53:00 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:25:32.272 18:53:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:25:32.272 18:53:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:25:32.272 18:53:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:25:32.272 18:53:00 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.272 18:53:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:25:32.272 18:53:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:25:32.272 18:53:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:25:32.272 [2024-10-08 18:53:00.859618] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:25:32.272 [2024-10-08 18:53:00.862482] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:32.272 [2024-10-08 18:53:00.862531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.272 [2024-10-08 18:53:00.862549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.272 [2024-10-08 18:53:00.862600] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:32.272 [2024-10-08 18:53:00.862618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.272 [2024-10-08 18:53:00.862635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.272 [2024-10-08 18:53:00.862650] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:32.272 [2024-10-08 18:53:00.862665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.272 [2024-10-08 18:53:00.862679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.272 [2024-10-08 18:53:00.862695] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:32.272 [2024-10-08 18:53:00.862708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.272 [2024-10-08 18:53:00.862723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.272 18:53:00 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.272 18:53:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:25:32.272 18:53:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:25:32.530 [2024-10-08 18:53:01.259628] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
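The detach half of each event is declared complete when the RPC view agrees: bdev_bdfs (its body echoed in full at lines 12-13 above) asks the target for every NVMe bdev's PCI address, and line 50 re-polls every half second while any remain, hence the count stepping 2 > 0 before the wait ends at 0 > 0. After re-attach and the 12-second settle, line 71 compares the sorted list against the expected pair; the backslash-heavy right-hand side in the trace is just xtrace quoting each literal character of the pattern. Condensed, with the function body taken verbatim from the trace and the loop shape an approximation:

    bdev_bdfs() {
        rpc_cmd bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }

    # line 50: poll until no NVMe bdevs are left in the target
    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done

    # lines 70-71: after re-attach plus 'sleep 12', both drives must be back
    bdfs=($(bdev_bdfs))
    [[ "${bdfs[*]}" == "0000:00:10.0 0000:00:11.0" ]]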
00:25:32.530 [2024-10-08 18:53:01.262535] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:32.530 [2024-10-08 18:53:01.262585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.530 [2024-10-08 18:53:01.262612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.530 [2024-10-08 18:53:01.262641] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:32.530 [2024-10-08 18:53:01.262658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.530 [2024-10-08 18:53:01.262672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.530 [2024-10-08 18:53:01.262691] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:32.530 [2024-10-08 18:53:01.262704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.530 [2024-10-08 18:53:01.262721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.530 [2024-10-08 18:53:01.262735] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:32.530 [2024-10-08 18:53:01.262751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.530 [2024-10-08 18:53:01.262765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.788 18:53:01 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:25:32.788 18:53:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:25:32.788 18:53:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:25:32.788 18:53:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:25:32.788 18:53:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:25:32.788 18:53:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:25:32.788 18:53:01 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.788 18:53:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:25:32.788 18:53:01 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.788 18:53:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:25:32.788 18:53:01 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:25:33.046 18:53:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:25:33.046 18:53:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:25:33.046 18:53:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:25:33.046 18:53:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:25:33.046 18:53:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:25:33.046 18:53:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:25:33.046 18:53:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:25:33.046 18:53:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:25:33.046 18:53:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:25:33.046 18:53:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:25:33.046 18:53:01 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:25:45.316 18:53:13 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:25:45.316 18:53:13 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:25:45.316 18:53:13 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:25:45.316 18:53:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:25:45.316 18:53:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:25:45.316 18:53:13 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.316 18:53:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:25:45.316 18:53:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:25:45.316 18:53:13 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.316 18:53:13 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:25:45.316 18:53:13 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:25:45.316 18:53:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:25:45.316 18:53:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:25:45.316 [2024-10-08 18:53:13.860003] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:25:45.316 [2024-10-08 18:53:13.863361] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:45.316 [2024-10-08 18:53:13.863413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.316 [2024-10-08 18:53:13.863434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.316 [2024-10-08 18:53:13.863464] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:45.316 [2024-10-08 18:53:13.863478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.316 [2024-10-08 18:53:13.863498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.316 [2024-10-08 18:53:13.863514] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:45.316 [2024-10-08 18:53:13.863530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.316 [2024-10-08 18:53:13.863544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.316 [2024-10-08 18:53:13.863561] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:45.316 [2024-10-08 18:53:13.863574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.316 [2024-10-08 18:53:13.863591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.316 18:53:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:25:45.316 18:53:13 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:25:45.316 18:53:13 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:25:45.316 18:53:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:25:45.316 18:53:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:25:45.316 18:53:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:25:45.316 18:53:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:25:45.316 18:53:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:25:45.316 18:53:13 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.316 18:53:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:25:45.316 18:53:13 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.316 18:53:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:25:45.316 18:53:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:25:45.625 [2024-10-08 18:53:14.260008] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:25:45.625 [2024-10-08 18:53:14.262906] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:45.625 [2024-10-08 18:53:14.262965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.625 [2024-10-08 18:53:14.262986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.625 [2024-10-08 18:53:14.263013] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:45.625 [2024-10-08 18:53:14.263029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.625 [2024-10-08 18:53:14.263042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.625 [2024-10-08 18:53:14.263058] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:45.625 [2024-10-08 18:53:14.263070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.625 [2024-10-08 18:53:14.263087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.625 [2024-10-08 18:53:14.263100] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:45.625 [2024-10-08 18:53:14.263113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.625 [2024-10-08 18:53:14.263125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.900 18:53:14 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:25:45.900 18:53:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:25:45.900 18:53:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:25:45.900 18:53:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:25:45.900 18:53:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:25:45.900 18:53:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:25:45.900 18:53:14 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.900 18:53:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:25:45.900 18:53:14 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.900 18:53:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:25:45.900 18:53:14 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:25:45.900 18:53:14 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:25:45.900 18:53:14 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:25:45.900 18:53:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:25:46.159 18:53:14 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:25:46.159 18:53:14 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:25:46.159 18:53:14 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:25:46.159 18:53:14 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:25:46.159 18:53:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:25:46.159 18:53:14 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:25:46.159 18:53:14 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:25:46.159 18:53:14 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:25:58.359 18:53:26 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:25:58.359 18:53:26 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:25:58.359 18:53:26 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:25:58.359 18:53:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:25:58.359 18:53:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:25:58.359 18:53:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:25:58.359 18:53:26 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.359 18:53:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:25:58.359 18:53:26 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.359 18:53:26 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:25:58.359 18:53:26 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:25:58.359 18:53:26 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.23 00:25:58.359 18:53:26 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.23 00:25:58.359 18:53:26 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:25:58.359 18:53:26 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.23 00:25:58.359 18:53:26 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.23 2 00:25:58.359 remove_attach_helper took 45.23s to complete (handling 2 nvme drive(s)) 18:53:26 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:25:58.359 18:53:26 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.359 18:53:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:25:58.359 18:53:26 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.359 18:53:26 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:25:58.359 18:53:26 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.359 18:53:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:25:58.359 18:53:26 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.359 18:53:26 sw_hotplug -- 
nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:25:58.359 18:53:26 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:25:58.359 18:53:26 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:25:58.359 18:53:26 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:25:58.359 18:53:26 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:25:58.359 18:53:26 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:25:58.359 18:53:26 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:25:58.359 18:53:26 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:25:58.359 18:53:26 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:25:58.359 18:53:26 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:25:58.359 18:53:26 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:25:58.359 18:53:26 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:25:58.359 18:53:26 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:26:04.914 18:53:32 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:26:04.914 18:53:32 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:26:04.914 18:53:32 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:26:04.914 18:53:32 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:26:04.914 18:53:32 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:26:04.914 18:53:33 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:26:04.915 18:53:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:26:04.915 18:53:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:26:04.915 18:53:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:26:04.915 18:53:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:26:04.915 18:53:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:26:04.915 18:53:33 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.915 18:53:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:26:04.915 [2024-10-08 18:53:33.023829] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
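debug_remove_attach_helper 3 6 true sets hotplug_events=3, hotplug_wait=6 and use_bdev=true (lines 27-29), so the detach/re-attach cycle runs three times against the live target. The trace records only the values echoed at lines 40 and 56-62, not their redirection targets; the sysfs sinks below are therefore assumptions, apart from /sys/bus/pci/rescan, which the cleanup trap registered at line 112 confirms. One cycle, sketched under those assumptions:

    for dev in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$dev/remove"   # line 40 (sink assumed)
    done
    # ... line 50: poll bdev_bdfs until both BDFs are gone ...
    echo 1 > /sys/bus/pci/rescan                      # line 56
    for dev in "${nvmes[@]}"; do
        # lines 59-62: the BDF is echoed twice (60 and 61), plausibly a
        # driver_override/bind sequence; the exact files are not visible
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
        echo "$dev" > /sys/bus/pci/drivers_probe
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"
    done
    sleep 12                                          # line 66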
00:26:04.915 [2024-10-08 18:53:33.025900] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:04.915 [2024-10-08 18:53:33.025952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:26:04.915 [2024-10-08 18:53:33.025985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.915 [2024-10-08 18:53:33.026015] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:04.915 [2024-10-08 18:53:33.026030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:26:04.915 [2024-10-08 18:53:33.026047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.915 [2024-10-08 18:53:33.026062] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:04.915 [2024-10-08 18:53:33.026078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:26:04.915 [2024-10-08 18:53:33.026092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.915 [2024-10-08 18:53:33.026109] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:04.915 [2024-10-08 18:53:33.026122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:26:04.915 [2024-10-08 18:53:33.026144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.915 18:53:33 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.915 18:53:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:26:04.915 18:53:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:26:04.915 [2024-10-08 18:53:33.423855] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:26:04.915 [2024-10-08 18:53:33.426497] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:04.915 [2024-10-08 18:53:33.426558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:26:04.915 [2024-10-08 18:53:33.426582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.915 [2024-10-08 18:53:33.426608] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:04.915 [2024-10-08 18:53:33.426624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:26:04.915 [2024-10-08 18:53:33.426637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.915 [2024-10-08 18:53:33.426655] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:04.915 [2024-10-08 18:53:33.426667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:26:04.915 [2024-10-08 18:53:33.426683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.915 [2024-10-08 18:53:33.426698] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:04.915 [2024-10-08 18:53:33.426713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:26:04.915 [2024-10-08 18:53:33.426725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.915 18:53:33 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:26:04.915 18:53:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:26:04.915 18:53:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:26:04.915 18:53:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:26:04.915 18:53:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:26:04.915 18:53:33 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.915 18:53:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:26:04.915 18:53:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:26:04.915 18:53:33 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.915 18:53:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:26:04.915 18:53:33 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:26:05.173 18:53:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:26:05.173 18:53:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:26:05.173 18:53:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:26:05.173 18:53:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:26:05.173 18:53:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:26:05.173 18:53:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:26:05.173 18:53:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:26:05.173 18:53:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:26:05.432 18:53:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:26:05.432 18:53:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:26:05.432 18:53:33 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:26:17.629 18:53:46 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:26:17.629 18:53:46 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:26:17.629 18:53:46 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:26:17.629 18:53:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:26:17.629 18:53:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:26:17.629 18:53:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:26:17.629 18:53:46 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.629 18:53:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:26:17.629 18:53:46 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.629 18:53:46 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:26:17.629 18:53:46 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:26:17.629 18:53:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:26:17.629 18:53:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:26:17.629 18:53:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:26:17.629 18:53:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:26:17.629 18:53:46 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:26:17.629 18:53:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:26:17.629 [2024-10-08 18:53:46.124161] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:26:17.629 18:53:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:26:17.629 [2024-10-08 18:53:46.126280] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:17.629 [2024-10-08 18:53:46.126338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.629 [2024-10-08 18:53:46.126358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.629 [2024-10-08 18:53:46.126390] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:17.629 [2024-10-08 18:53:46.126415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.629 [2024-10-08 18:53:46.126432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.629 [2024-10-08 18:53:46.126448] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:17.629 [2024-10-08 18:53:46.126464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.629 [2024-10-08 18:53:46.126479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.629 [2024-10-08 18:53:46.126498] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:17.629 [2024-10-08 18:53:46.126511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.629 [2024-10-08 18:53:46.126531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.629 18:53:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:26:17.629 18:53:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:26:17.629 18:53:46 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.629 18:53:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:26:17.629 18:53:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:26:17.629 18:53:46 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.629 18:53:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:26:17.629 18:53:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:26:17.887 [2024-10-08 18:53:46.524192] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:26:17.887 [2024-10-08 18:53:46.526322] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:17.887 [2024-10-08 18:53:46.526372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.887 [2024-10-08 18:53:46.526396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.887 [2024-10-08 18:53:46.526424] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:17.887 [2024-10-08 18:53:46.526445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.887 [2024-10-08 18:53:46.526459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.887 [2024-10-08 18:53:46.526479] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:17.887 [2024-10-08 18:53:46.526492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.887 [2024-10-08 18:53:46.526510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.887 [2024-10-08 18:53:46.526525] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:17.887 [2024-10-08 18:53:46.526541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.887 [2024-10-08 18:53:46.526556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.145 18:53:46 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:26:18.145 18:53:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:26:18.145 18:53:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:26:18.145 18:53:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:26:18.145 18:53:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:26:18.145 18:53:46 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.145 18:53:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:26:18.145 18:53:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:26:18.145 18:53:46 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.145 18:53:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:26:18.145 18:53:46 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:26:18.145 18:53:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:26:18.146 18:53:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:26:18.146 18:53:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:26:18.446 18:53:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:26:18.446 18:53:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:26:18.446 18:53:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:26:18.446 18:53:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:26:18.446 18:53:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:26:18.446 18:53:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:26:18.446 18:53:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:26:18.446 18:53:47 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:26:30.644 18:53:59 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:26:30.644 18:53:59 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:26:30.644 18:53:59 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:26:30.644 18:53:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:26:30.644 18:53:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:26:30.644 18:53:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:26:30.644 18:53:59 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.644 18:53:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:26:30.644 18:53:59 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.644 18:53:59 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:26:30.644 18:53:59 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:26:30.644 18:53:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:26:30.644 18:53:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:26:30.644 18:53:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:26:30.644 18:53:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:26:30.644 18:53:59 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:26:30.644 18:53:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:26:30.644 18:53:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:26:30.644 18:53:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:26:30.644 18:53:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:26:30.644 18:53:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:26:30.644 18:53:59 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.644 18:53:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:26:30.644 [2024-10-08 18:53:59.224474] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:26:30.644 [2024-10-08 18:53:59.226909] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:30.644 [2024-10-08 18:53:59.226979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.644 [2024-10-08 18:53:59.227002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.644 [2024-10-08 18:53:59.227079] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:30.644 [2024-10-08 18:53:59.227100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.644 [2024-10-08 18:53:59.227118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.644 [2024-10-08 18:53:59.227135] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:30.644 [2024-10-08 18:53:59.227157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.644 [2024-10-08 18:53:59.227170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.644 [2024-10-08 18:53:59.227190] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:30.644 [2024-10-08 18:53:59.227203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.644 [2024-10-08 18:53:59.227221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.644 18:53:59 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.644 18:53:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:26:30.644 18:53:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:26:30.903 [2024-10-08 18:53:59.624521] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
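This is the third and final event; once hotplug_events reaches zero the helper unwinds through timing_cmd, which reports the wall time it captured with TIMEFORMAT=%2R (45.28 s for this target-driven phase below, against 43.32 s for the first half earlier). How the time is actually captured is not visible in the trace, so the command substitution in this sketch is an assumption:

    timing_cmd() {
        local cmd_es=0
        local time=0 TIMEFORMAT=%2R
        # assumed capture: bash's 'time' writes to stderr, swapped onto
        # stdout here while the command's own output goes back to stderr
        time=$({ time "$@" >&2; } 2>&1) || cmd_es=$?
        echo "$time"    # surfaced as helper_time ('45.28' in this run)
        return "$cmd_es"
    }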
00:26:30.903 [2024-10-08 18:53:59.627137] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:30.903 [2024-10-08 18:53:59.627182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.903 [2024-10-08 18:53:59.627208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.903 [2024-10-08 18:53:59.627241] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:30.903 [2024-10-08 18:53:59.627271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.903 [2024-10-08 18:53:59.627286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.903 [2024-10-08 18:53:59.627307] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:30.903 [2024-10-08 18:53:59.627321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.903 [2024-10-08 18:53:59.627339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.903 [2024-10-08 18:53:59.627358] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:26:30.903 [2024-10-08 18:53:59.627379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.903 [2024-10-08 18:53:59.627394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.162 18:53:59 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:26:31.162 18:53:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:26:31.162 18:53:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:26:31.162 18:53:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:26:31.162 18:53:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:26:31.162 18:53:59 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.162 18:53:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:26:31.162 18:53:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:26:31.162 18:53:59 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.162 18:53:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:26:31.162 18:53:59 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:26:31.421 18:53:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:26:31.421 18:53:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:26:31.421 18:53:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:26:31.421 18:54:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:26:31.421 18:54:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:26:31.421 18:54:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:26:31.421 18:54:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:26:31.421 18:54:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:26:31.421 18:54:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:26:31.421 18:54:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:26:31.421 18:54:00 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:26:43.620 18:54:12 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:26:43.620 18:54:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:26:43.620 18:54:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:26:43.620 18:54:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:26:43.620 18:54:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:26:43.620 18:54:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:26:43.620 18:54:12 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.620 18:54:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:26:43.620 18:54:12 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.620 18:54:12 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:26:43.620 18:54:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:26:43.620 18:54:12 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.28 00:26:43.620 18:54:12 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.28 00:26:43.620 18:54:12 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:26:43.620 18:54:12 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.28 00:26:43.620 18:54:12 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.28 2 00:26:43.620 remove_attach_helper took 45.28s to complete (handling 2 nvme drive(s)) 18:54:12 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:26:43.620 18:54:12 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 69725 00:26:43.620 18:54:12 sw_hotplug -- common/autotest_common.sh@950 -- # '[' -z 69725 ']' 00:26:43.620 18:54:12 sw_hotplug -- common/autotest_common.sh@954 -- # kill -0 69725 00:26:43.620 18:54:12 sw_hotplug -- common/autotest_common.sh@955 -- # uname 00:26:43.620 18:54:12 sw_hotplug -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:43.620 18:54:12 sw_hotplug -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69725 00:26:43.620 18:54:12 sw_hotplug -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:43.620 killing process with pid 69725 00:26:43.620 18:54:12 sw_hotplug -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:43.620 18:54:12 sw_hotplug -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69725' 00:26:43.620 18:54:12 sw_hotplug -- common/autotest_common.sh@969 -- # kill 69725 00:26:43.620 18:54:12 sw_hotplug -- common/autotest_common.sh@974 -- # wait 69725 00:26:46.901 18:54:15 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:46.901 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:47.468 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:26:47.468 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:26:47.468 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:26:47.468 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:26:47.765 00:26:47.765 real 2m33.987s 00:26:47.765 user 1m51.744s 00:26:47.765 sys 0m22.810s 00:26:47.765 18:54:16 sw_hotplug -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:26:47.765 18:54:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:26:47.765 ************************************ 00:26:47.765 END TEST sw_hotplug 00:26:47.765 ************************************ 00:26:47.765 18:54:16 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:26:47.765 18:54:16 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:26:47.765 18:54:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:47.765 18:54:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:47.765 18:54:16 -- common/autotest_common.sh@10 -- # set +x 00:26:47.765 ************************************ 00:26:47.765 START TEST nvme_xnvme 00:26:47.765 ************************************ 00:26:47.765 18:54:16 nvme_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:26:47.765 * Looking for test storage... 00:26:47.765 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:26:47.765 18:54:16 nvme_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:47.765 18:54:16 nvme_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:26:47.765 18:54:16 nvme_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:48.044 18:54:16 nvme_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:48.044 18:54:16 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:48.044 18:54:16 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:48.044 18:54:16 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:48.044 18:54:16 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:26:48.044 18:54:16 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:26:48.044 18:54:16 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:26:48.044 18:54:16 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:26:48.044 18:54:16 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:26:48.044 18:54:16 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:26:48.044 18:54:16 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:26:48.044 18:54:16 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:48.044 18:54:16 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:26:48.044 18:54:16 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:26:48.044 18:54:16 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:48.044 18:54:16 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:48.045 18:54:16 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:26:48.045 18:54:16 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:26:48.045 18:54:16 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:48.045 18:54:16 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:26:48.045 18:54:16 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:26:48.045 18:54:16 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:26:48.045 18:54:16 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:26:48.045 18:54:16 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:48.045 18:54:16 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:26:48.045 18:54:16 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:26:48.045 18:54:16 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:48.045 18:54:16 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:48.045 18:54:16 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:26:48.045 18:54:16 nvme_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:48.045 18:54:16 nvme_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:48.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.045 --rc genhtml_branch_coverage=1 00:26:48.045 --rc genhtml_function_coverage=1 00:26:48.045 --rc genhtml_legend=1 00:26:48.045 --rc geninfo_all_blocks=1 00:26:48.045 --rc geninfo_unexecuted_blocks=1 00:26:48.045 00:26:48.045 ' 00:26:48.045 18:54:16 nvme_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:48.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.045 --rc genhtml_branch_coverage=1 00:26:48.045 --rc genhtml_function_coverage=1 00:26:48.045 --rc genhtml_legend=1 00:26:48.045 --rc geninfo_all_blocks=1 00:26:48.045 --rc geninfo_unexecuted_blocks=1 00:26:48.045 00:26:48.045 ' 00:26:48.045 18:54:16 nvme_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:48.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.045 --rc genhtml_branch_coverage=1 00:26:48.045 --rc genhtml_function_coverage=1 00:26:48.045 --rc genhtml_legend=1 00:26:48.045 --rc geninfo_all_blocks=1 00:26:48.045 --rc geninfo_unexecuted_blocks=1 00:26:48.045 00:26:48.045 ' 00:26:48.045 18:54:16 nvme_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:48.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.045 --rc genhtml_branch_coverage=1 00:26:48.045 --rc genhtml_function_coverage=1 00:26:48.045 --rc genhtml_legend=1 00:26:48.045 --rc geninfo_all_blocks=1 00:26:48.045 --rc geninfo_unexecuted_blocks=1 00:26:48.045 00:26:48.045 ' 00:26:48.045 18:54:16 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:48.045 18:54:16 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:26:48.045 18:54:16 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:48.045 18:54:16 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:48.045 18:54:16 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:48.045 18:54:16 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.045 18:54:16 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.045 18:54:16 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.045 18:54:16 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:26:48.045 18:54:16 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.045 18:54:16 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:26:48.045 18:54:16 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:48.045 18:54:16 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:48.045 18:54:16 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:26:48.045 ************************************ 00:26:48.045 START TEST xnvme_to_malloc_dd_copy 00:26:48.045 ************************************ 00:26:48.045 18:54:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1125 -- # malloc_to_xnvme_copy 00:26:48.045 18:54:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:26:48.045 18:54:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:26:48.045 18:54:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:26:48.045 18:54:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:26:48.045 18:54:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:26:48.045 18:54:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:26:48.045 18:54:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:26:48.045 18:54:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:26:48.045 18:54:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:26:48.045 18:54:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:26:48.045 18:54:16 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:26:48.045 18:54:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:26:48.045 18:54:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:26:48.045 18:54:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:26:48.045 18:54:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:26:48.045 18:54:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:26:48.045 18:54:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:26:48.045 18:54:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:26:48.045 18:54:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:26:48.045 18:54:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:26:48.045 18:54:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:26:48.045 18:54:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:26:48.045 18:54:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:26:48.045 18:54:16 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:26:48.045 { 00:26:48.045 "subsystems": [ 00:26:48.045 { 00:26:48.045 "subsystem": "bdev", 00:26:48.045 "config": [ 00:26:48.045 { 00:26:48.045 "params": { 00:26:48.045 "block_size": 512, 00:26:48.045 "num_blocks": 2097152, 00:26:48.045 "name": "malloc0" 00:26:48.045 }, 00:26:48.045 "method": "bdev_malloc_create" 00:26:48.045 }, 00:26:48.045 { 00:26:48.045 "params": { 00:26:48.045 "io_mechanism": "libaio", 00:26:48.045 "filename": "/dev/nullb0", 00:26:48.045 "name": "null0" 00:26:48.045 }, 00:26:48.045 "method": "bdev_xnvme_create" 00:26:48.045 }, 00:26:48.045 { 00:26:48.045 "method": "bdev_wait_for_examine" 00:26:48.045 } 00:26:48.045 ] 00:26:48.045 } 00:26:48.045 ] 00:26:48.045 } 00:26:48.045 [2024-10-08 18:54:16.698206] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
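The JSON above is the complete bdev configuration spdk_dd reads from fd 62: a 1 GiB malloc bdev (2097152 blocks of 512 bytes) as the copy source, an xnvme bdev over /dev/nullb0 using libaio as the target, and bdev_wait_for_examine so the copy starts only after both exist. With null_blk loaded (modprobe null_blk gb=1, as traced earlier) the run reproduces by hand; this sketch writes the config to a file (xnvme_copy.json is a name invented here) instead of piping it over /dev/fd/62:

    # Sketch, assuming the spdk_repo layout used throughout this log.
    cat > xnvme_copy.json <<'EOF'
    {"subsystems": [{"subsystem": "bdev", "config": [
      {"method": "bdev_malloc_create",
       "params": {"name": "malloc0", "block_size": 512, "num_blocks": 2097152}},
      {"method": "bdev_xnvme_create",
       "params": {"name": "null0", "filename": "/dev/nullb0", "io_mechanism": "libaio"}},
      {"method": "bdev_wait_for_examine"}
    ]}]}
    EOF
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json xnvme_copy.json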
00:26:48.045 [2024-10-08 18:54:16.698386] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71096 ] 00:26:48.304 [2024-10-08 18:54:16.889515] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:48.562 [2024-10-08 18:54:17.213000] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:51.845  [2024-10-08T18:54:21.168Z] Copying: 195/1024 [MB] (195 MBps) [2024-10-08T18:54:22.116Z] Copying: 389/1024 [MB] (193 MBps) [2024-10-08T18:54:23.078Z] Copying: 600/1024 [MB] (210 MBps) [2024-10-08T18:54:24.011Z] Copying: 814/1024 [MB] (213 MBps) [2024-10-08T18:54:29.274Z] Copying: 1024/1024 [MB] (average 206 MBps) 00:27:00.517 00:27:00.517 18:54:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:27:00.517 18:54:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:27:00.517 18:54:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:27:00.517 18:54:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:27:00.517 { 00:27:00.517 "subsystems": [ 00:27:00.517 { 00:27:00.517 "subsystem": "bdev", 00:27:00.517 "config": [ 00:27:00.517 { 00:27:00.517 "params": { 00:27:00.517 "block_size": 512, 00:27:00.517 "num_blocks": 2097152, 00:27:00.517 "name": "malloc0" 00:27:00.517 }, 00:27:00.517 "method": "bdev_malloc_create" 00:27:00.517 }, 00:27:00.517 { 00:27:00.517 "params": { 00:27:00.517 "io_mechanism": "libaio", 00:27:00.517 "filename": "/dev/nullb0", 00:27:00.517 "name": "null0" 00:27:00.517 }, 00:27:00.517 "method": "bdev_xnvme_create" 00:27:00.517 }, 00:27:00.517 { 00:27:00.517 "method": "bdev_wait_for_examine" 00:27:00.517 } 00:27:00.517 ] 00:27:00.517 } 00:27:00.517 ] 00:27:00.517 } 00:27:00.517 [2024-10-08 18:54:28.769422] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
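The second pass (pid 71233) is the verification leg: the bdev configuration is byte-for-byte the same, and only the data direction flips, so the sketch above just swaps its flags:

    # Sketch: read-back pass, reusing xnvme_copy.json from above.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json xnvme_copy.json

With libaio both directions land in the same ~206-207 MBps band, as the surrounding Copying progress lines show.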
00:27:00.517 [2024-10-08 18:54:28.769603] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71233 ] 00:27:00.517 [2024-10-08 18:54:28.940677] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:00.517 [2024-10-08 18:54:29.171224] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:03.801  [2024-10-08T18:54:33.123Z] Copying: 205/1024 [MB] (205 MBps) [2024-10-08T18:54:34.055Z] Copying: 412/1024 [MB] (207 MBps) [2024-10-08T18:54:34.988Z] Copying: 623/1024 [MB] (210 MBps) [2024-10-08T18:54:35.921Z] Copying: 830/1024 [MB] (206 MBps) [2024-10-08T18:54:41.209Z] Copying: 1024/1024 [MB] (average 207 MBps) 00:27:12.452 00:27:12.452 18:54:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:27:12.452 18:54:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:27:12.452 18:54:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:27:12.452 18:54:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:27:12.452 18:54:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:27:12.452 18:54:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:27:12.452 { 00:27:12.452 "subsystems": [ 00:27:12.452 { 00:27:12.452 "subsystem": "bdev", 00:27:12.453 "config": [ 00:27:12.453 { 00:27:12.453 "params": { 00:27:12.453 "block_size": 512, 00:27:12.453 "num_blocks": 2097152, 00:27:12.453 "name": "malloc0" 00:27:12.453 }, 00:27:12.453 "method": "bdev_malloc_create" 00:27:12.453 }, 00:27:12.453 { 00:27:12.453 "params": { 00:27:12.453 "io_mechanism": "io_uring", 00:27:12.453 "filename": "/dev/nullb0", 00:27:12.453 "name": "null0" 00:27:12.453 }, 00:27:12.453 "method": "bdev_xnvme_create" 00:27:12.453 }, 00:27:12.453 { 00:27:12.453 "method": "bdev_wait_for_examine" 00:27:12.453 } 00:27:12.453 ] 00:27:12.453 } 00:27:12.453 ] 00:27:12.453 } 00:27:12.453 [2024-10-08 18:54:40.817506] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
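The io_uring passes that follow change exactly one key in this JSON, "io_mechanism": "io_uring", set at xnvme.sh@39 in the trace above. The same xnvme bdev can also be created against a live target over RPC; the positional order (filename, name, io_mechanism) matches the bdev_xnvme_create commands the blockdev tests print verbatim later in this log:

    # Sketch: runtime equivalent of the config's bdev_xnvme_create entry,
    # assuming an SPDK app is already listening on the default RPC socket.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_xnvme_create /dev/nullb0 null0 io_uring

Against this null_blk device the mechanism swap is worth real throughput: ~207 MBps average with libaio versus ~239 MBps forward and ~251 MBps on the read-back with io_uring, per the Copying summaries that follow.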
00:27:12.453 [2024-10-08 18:54:40.817702] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71359 ] 00:27:12.453 [2024-10-08 18:54:41.005310] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.724 [2024-10-08 18:54:41.245488] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:15.286  [2024-10-08T18:54:44.977Z] Copying: 240/1024 [MB] (240 MBps) [2024-10-08T18:54:45.912Z] Copying: 473/1024 [MB] (233 MBps) [2024-10-08T18:54:47.287Z] Copying: 720/1024 [MB] (246 MBps) [2024-10-08T18:54:47.287Z] Copying: 957/1024 [MB] (237 MBps) [2024-10-08T18:54:52.555Z] Copying: 1024/1024 [MB] (average 239 MBps) 00:27:23.798 00:27:23.798 18:54:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:27:23.798 18:54:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:27:23.798 18:54:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:27:23.798 18:54:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:27:23.798 { 00:27:23.798 "subsystems": [ 00:27:23.798 { 00:27:23.798 "subsystem": "bdev", 00:27:23.798 "config": [ 00:27:23.798 { 00:27:23.798 "params": { 00:27:23.798 "block_size": 512, 00:27:23.798 "num_blocks": 2097152, 00:27:23.798 "name": "malloc0" 00:27:23.798 }, 00:27:23.798 "method": "bdev_malloc_create" 00:27:23.798 }, 00:27:23.798 { 00:27:23.798 "params": { 00:27:23.798 "io_mechanism": "io_uring", 00:27:23.798 "filename": "/dev/nullb0", 00:27:23.798 "name": "null0" 00:27:23.798 }, 00:27:23.798 "method": "bdev_xnvme_create" 00:27:23.798 }, 00:27:23.798 { 00:27:23.798 "method": "bdev_wait_for_examine" 00:27:23.798 } 00:27:23.798 ] 00:27:23.798 } 00:27:23.798 ] 00:27:23.798 } 00:27:23.798 [2024-10-08 18:54:51.736614] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:27:23.798 [2024-10-08 18:54:51.736802] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71488 ] 00:27:23.798 [2024-10-08 18:54:51.914124] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:23.798 [2024-10-08 18:54:52.189093] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:26.329  [2024-10-08T18:54:56.022Z] Copying: 250/1024 [MB] (250 MBps) [2024-10-08T18:54:56.958Z] Copying: 501/1024 [MB] (251 MBps) [2024-10-08T18:54:57.893Z] Copying: 754/1024 [MB] (253 MBps) [2024-10-08T18:54:57.893Z] Copying: 1006/1024 [MB] (251 MBps) [2024-10-08T18:55:03.229Z] Copying: 1024/1024 [MB] (average 251 MBps) 00:27:34.472 00:27:34.472 18:55:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:27:34.472 18:55:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:27:34.472 00:27:34.472 real 0m45.718s 00:27:34.472 user 0m40.213s 00:27:34.472 sys 0m4.891s 00:27:34.472 18:55:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:34.472 18:55:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:27:34.472 ************************************ 00:27:34.472 END TEST xnvme_to_malloc_dd_copy 00:27:34.472 ************************************ 00:27:34.472 18:55:02 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:27:34.472 18:55:02 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:34.472 18:55:02 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:34.472 18:55:02 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:27:34.472 ************************************ 00:27:34.472 START TEST xnvme_bdevperf 00:27:34.472 ************************************ 00:27:34.472 18:55:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1125 -- # xnvme_bdevperf 00:27:34.472 18:55:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:27:34.472 18:55:02 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:27:34.472 18:55:02 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:27:34.472 18:55:02 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:27:34.472 18:55:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:27:34.472 18:55:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:27:34.472 18:55:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:27:34.472 18:55:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:27:34.472 18:55:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:27:34.472 18:55:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:27:34.472 18:55:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:27:34.472 18:55:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:27:34.472 18:55:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:27:34.472 18:55:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:27:34.472 18:55:02 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:27:34.472 18:55:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:27:34.472 18:55:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:27:34.472 18:55:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:27:34.472 18:55:02 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:27:34.472 18:55:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:34.472 { 00:27:34.472 "subsystems": [ 00:27:34.472 { 00:27:34.472 "subsystem": "bdev", 00:27:34.472 "config": [ 00:27:34.472 { 00:27:34.472 "params": { 00:27:34.472 "io_mechanism": "libaio", 00:27:34.472 "filename": "/dev/nullb0", 00:27:34.472 "name": "null0" 00:27:34.472 }, 00:27:34.472 "method": "bdev_xnvme_create" 00:27:34.472 }, 00:27:34.472 { 00:27:34.472 "method": "bdev_wait_for_examine" 00:27:34.472 } 00:27:34.472 ] 00:27:34.472 } 00:27:34.472 ] 00:27:34.472 } 00:27:34.472 [2024-10-08 18:55:02.450796] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:27:34.472 [2024-10-08 18:55:02.451003] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71629 ] 00:27:34.472 [2024-10-08 18:55:02.625462] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:34.472 [2024-10-08 18:55:02.866214] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:34.731 Running I/O for 5 seconds... 
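xnvme_bdevperf swaps spdk_dd for the bdevperf example app: the same null0 xnvme bdev, now under a 5 second 4 KiB random-read load at queue depth 64, first with libaio (this run) and then with io_uring. The invocation traced at xnvme.sh@74 runs standalone as:

    # Sketch: null0.json is a stand-in name for the fd-62 config shown
    # above (bdev_xnvme_create of /dev/nullb0 with libaio for this run).
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json null0.json \
        -q 64 -w randread -t 5 -T null0 -o 4096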
00:27:36.601 122240.00 IOPS, 477.50 MiB/s
[2024-10-08T18:55:06.295Z] 135712.00 IOPS, 530.12 MiB/s
[2024-10-08T18:55:07.672Z] 140266.67 IOPS, 547.92 MiB/s
[2024-10-08T18:55:08.607Z] 142928.00 IOPS, 558.31 MiB/s
00:27:39.850 Latency(us)
00:27:39.850 [2024-10-08T18:55:08.607Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:39.850 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096)
00:27:39.850 null0 : 5.00 144200.27 563.28 0.00 0.00 441.23 156.04 2278.16
00:27:39.850 [2024-10-08T18:55:08.607Z] ===================================================================================================================
00:27:39.850 [2024-10-08T18:55:08.607Z] Total : 144200.27 563.28 0.00 0.00 441.23 156.04 2278.16
00:27:41.225 18:55:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}"
00:27:41.225 18:55:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring
00:27:41.225 18:55:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096
00:27:41.225 18:55:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf
00:27:41.225 18:55:09 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:27:41.225 18:55:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:41.225 {
00:27:41.225   "subsystems": [
00:27:41.225     {
00:27:41.225       "subsystem": "bdev",
00:27:41.225       "config": [
00:27:41.225         {
00:27:41.225           "params": {
00:27:41.225             "io_mechanism": "io_uring",
00:27:41.225             "filename": "/dev/nullb0",
00:27:41.225             "name": "null0"
00:27:41.225           },
00:27:41.225           "method": "bdev_xnvme_create"
00:27:41.225         },
00:27:41.225         {
00:27:41.225           "method": "bdev_wait_for_examine"
00:27:41.225         }
00:27:41.225       ]
00:27:41.225     }
00:27:41.225   ]
00:27:41.225 }
00:27:41.483 [2024-10-08 18:55:10.049019] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization...
00:27:41.483 [2024-10-08 18:55:10.049191] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71718 ]
00:27:41.742 [2024-10-08 18:55:10.242066] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:42.001 [2024-10-08 18:55:10.508216] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:27:42.260 Running I/O for 5 seconds...
00:27:44.132 189888.00 IOPS, 741.75 MiB/s [2024-10-08T18:55:14.265Z] 190976.00 IOPS, 746.00 MiB/s [2024-10-08T18:55:15.223Z] 191125.33 IOPS, 746.58 MiB/s [2024-10-08T18:55:16.158Z] 190864.00 IOPS, 745.56 MiB/s 00:27:47.401 Latency(us) 00:27:47.401 [2024-10-08T18:55:16.158Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:47.401 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:27:47.401 null0 : 5.00 190710.64 744.96 0.00 0.00 333.17 205.78 1724.22 00:27:47.401 [2024-10-08T18:55:16.158Z] =================================================================================================================== 00:27:47.401 [2024-10-08T18:55:16.158Z] Total : 190710.64 744.96 0.00 0.00 333.17 205.78 1724.22 00:27:48.778 18:55:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:27:48.778 18:55:17 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:27:48.778 00:27:48.778 real 0m14.946s 00:27:48.778 user 0m11.397s 00:27:48.778 sys 0m3.324s 00:27:48.778 18:55:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:48.778 18:55:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:48.778 ************************************ 00:27:48.778 END TEST xnvme_bdevperf 00:27:48.778 ************************************ 00:27:48.778 00:27:48.778 real 1m0.986s 00:27:48.778 user 0m51.765s 00:27:48.778 sys 0m8.397s 00:27:48.778 18:55:17 nvme_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:48.778 ************************************ 00:27:48.778 END TEST nvme_xnvme 00:27:48.778 18:55:17 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:27:48.778 ************************************ 00:27:48.778 18:55:17 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:27:48.778 18:55:17 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:48.778 18:55:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:48.778 18:55:17 -- common/autotest_common.sh@10 -- # set +x 00:27:48.778 ************************************ 00:27:48.778 START TEST blockdev_xnvme 00:27:48.778 ************************************ 00:27:48.778 18:55:17 blockdev_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:27:48.778 * Looking for test storage... 
00:27:48.778 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:27:48.778 18:55:17 blockdev_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:48.778 18:55:17 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:27:48.778 18:55:17 blockdev_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:49.038 18:55:17 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:49.038 18:55:17 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:49.038 18:55:17 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:49.038 18:55:17 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:49.038 18:55:17 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:27:49.038 18:55:17 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:27:49.038 18:55:17 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:27:49.038 18:55:17 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:27:49.038 18:55:17 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:27:49.038 18:55:17 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:27:49.038 18:55:17 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:27:49.038 18:55:17 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:49.038 18:55:17 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:27:49.038 18:55:17 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:27:49.038 18:55:17 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:49.038 18:55:17 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:49.038 18:55:17 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:27:49.038 18:55:17 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:27:49.038 18:55:17 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:49.038 18:55:17 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:27:49.038 18:55:17 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:27:49.038 18:55:17 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:27:49.038 18:55:17 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:27:49.038 18:55:17 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:49.038 18:55:17 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:27:49.038 18:55:17 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:27:49.038 18:55:17 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:49.038 18:55:17 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:49.038 18:55:17 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:27:49.038 18:55:17 blockdev_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:49.038 18:55:17 blockdev_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:49.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.038 --rc genhtml_branch_coverage=1 00:27:49.038 --rc genhtml_function_coverage=1 00:27:49.038 --rc genhtml_legend=1 00:27:49.038 --rc geninfo_all_blocks=1 00:27:49.038 --rc geninfo_unexecuted_blocks=1 00:27:49.038 00:27:49.038 ' 00:27:49.038 18:55:17 blockdev_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:49.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.038 --rc genhtml_branch_coverage=1 00:27:49.038 --rc genhtml_function_coverage=1 00:27:49.038 --rc genhtml_legend=1 
00:27:49.038 --rc geninfo_all_blocks=1 00:27:49.038 --rc geninfo_unexecuted_blocks=1 00:27:49.038 00:27:49.038 ' 00:27:49.038 18:55:17 blockdev_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:49.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.038 --rc genhtml_branch_coverage=1 00:27:49.038 --rc genhtml_function_coverage=1 00:27:49.038 --rc genhtml_legend=1 00:27:49.038 --rc geninfo_all_blocks=1 00:27:49.038 --rc geninfo_unexecuted_blocks=1 00:27:49.038 00:27:49.038 ' 00:27:49.038 18:55:17 blockdev_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:49.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:49.038 --rc genhtml_branch_coverage=1 00:27:49.038 --rc genhtml_function_coverage=1 00:27:49.038 --rc genhtml_legend=1 00:27:49.038 --rc geninfo_all_blocks=1 00:27:49.038 --rc geninfo_unexecuted_blocks=1 00:27:49.038 00:27:49.038 ' 00:27:49.038 18:55:17 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:27:49.039 18:55:17 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:27:49.039 18:55:17 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:27:49.039 18:55:17 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:27:49.039 18:55:17 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:27:49.039 18:55:17 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:27:49.039 18:55:17 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:27:49.039 18:55:17 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:27:49.039 18:55:17 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:27:49.039 18:55:17 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:27:49.039 18:55:17 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:27:49.039 18:55:17 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:27:49.039 18:55:17 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:27:49.039 18:55:17 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:27:49.039 18:55:17 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:27:49.039 18:55:17 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:27:49.039 18:55:17 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:27:49.039 18:55:17 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:27:49.039 18:55:17 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:27:49.039 18:55:17 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:27:49.039 18:55:17 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:27:49.039 18:55:17 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:27:49.039 18:55:17 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:27:49.039 18:55:17 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:27:49.039 18:55:17 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=71873 00:27:49.039 18:55:17 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:27:49.039 18:55:17 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:27:49.039 18:55:17 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 71873 00:27:49.039 18:55:17 blockdev_xnvme -- common/autotest_common.sh@831 -- # 
'[' -z 71873 ']' 00:27:49.039 18:55:17 blockdev_xnvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:49.039 18:55:17 blockdev_xnvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:49.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:49.039 18:55:17 blockdev_xnvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:49.039 18:55:17 blockdev_xnvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:49.039 18:55:17 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:27:49.039 [2024-10-08 18:55:17.722669] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:27:49.039 [2024-10-08 18:55:17.722847] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71873 ] 00:27:49.298 [2024-10-08 18:55:17.916674] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:49.556 [2024-10-08 18:55:18.140148] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:50.500 18:55:19 blockdev_xnvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:50.500 18:55:19 blockdev_xnvme -- common/autotest_common.sh@864 -- # return 0 00:27:50.500 18:55:19 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:27:50.500 18:55:19 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:27:50.500 18:55:19 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:27:50.500 18:55:19 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:27:50.500 18:55:19 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:50.758 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:51.017 Waiting for block devices as requested 00:27:51.017 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:27:51.275 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:27:51.275 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:27:51.534 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:27:56.811 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:27:56.811 18:55:25 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:27:56.811 
18:55:25 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:56.811 18:55:25 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:27:56.811 18:55:25 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:27:56.811 18:55:25 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:27:56.811 18:55:25 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:27:56.811 18:55:25 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:27:56.811 18:55:25 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:27:56.811 18:55:25 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:27:56.811 18:55:25 blockdev_xnvme -- bdev/blockdev.sh@96 
-- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:27:56.811 18:55:25 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:27:56.811 18:55:25 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:27:56.811 18:55:25 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:27:56.811 18:55:25 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:27:56.811 18:55:25 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:27:56.811 18:55:25 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:27:56.811 18:55:25 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:27:56.811 18:55:25 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:27:56.811 18:55:25 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:27:56.811 18:55:25 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:27:56.811 18:55:25 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:27:56.811 18:55:25 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:27:56.811 18:55:25 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:27:56.811 18:55:25 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:27:56.811 18:55:25 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:27:56.811 18:55:25 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:27:56.811 18:55:25 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:27:56.811 18:55:25 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:27:56.811 18:55:25 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:27:56.811 nvme0n1 00:27:56.811 nvme1n1 00:27:56.811 nvme2n1 00:27:56.811 nvme2n2 00:27:56.811 nvme2n3 00:27:56.811 nvme3n1 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.811 18:55:25 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.811 18:55:25 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:27:56.811 18:55:25 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.811 18:55:25 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@10 
-- # set +x 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.811 18:55:25 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.811 18:55:25 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:27:56.811 18:55:25 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.811 18:55:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:27:56.812 18:55:25 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:27:56.812 18:55:25 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.812 18:55:25 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:27:56.812 18:55:25 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:27:56.812 18:55:25 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "1245b676-d6e3-4852-9152-b9aa86e8dcf0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "1245b676-d6e3-4852-9152-b9aa86e8dcf0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "716f34eb-7065-4cbc-9469-d73a7ce85d1a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "716f34eb-7065-4cbc-9469-d73a7ce85d1a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "2a6bbc73-e44d-45a0-a3fd-16f86f3485c3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2a6bbc73-e44d-45a0-a3fd-16f86f3485c3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": 
false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "92f68112-4c90-45d1-afe1-9fdb94265627"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "92f68112-4c90-45d1-afe1-9fdb94265627",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "fd87dd81-6a22-4f83-a7eb-e1f60220f464"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "fd87dd81-6a22-4f83-a7eb-e1f60220f464",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "5a3aead9-bfee-4a83-a087-55ff6fccb2ea"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "5a3aead9-bfee-4a83-a087-55ff6fccb2ea",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:27:56.812 18:55:25 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:27:56.812 18:55:25 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:27:56.812 18:55:25 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:27:56.812 18:55:25 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 71873 00:27:56.812 18:55:25 blockdev_xnvme -- 
common/autotest_common.sh@950 -- # '[' -z 71873 ']' 00:27:56.812 18:55:25 blockdev_xnvme -- common/autotest_common.sh@954 -- # kill -0 71873 00:27:56.812 18:55:25 blockdev_xnvme -- common/autotest_common.sh@955 -- # uname 00:27:56.812 18:55:25 blockdev_xnvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:56.812 18:55:25 blockdev_xnvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71873 00:27:56.812 killing process with pid 71873 00:27:56.812 18:55:25 blockdev_xnvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:56.812 18:55:25 blockdev_xnvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:56.812 18:55:25 blockdev_xnvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71873' 00:27:56.812 18:55:25 blockdev_xnvme -- common/autotest_common.sh@969 -- # kill 71873 00:27:56.812 18:55:25 blockdev_xnvme -- common/autotest_common.sh@974 -- # wait 71873 00:28:00.107 18:55:28 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:00.107 18:55:28 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:28:00.107 18:55:28 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:28:00.107 18:55:28 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:00.107 18:55:28 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:28:00.107 ************************************ 00:28:00.107 START TEST bdev_hello_world 00:28:00.107 ************************************ 00:28:00.107 18:55:28 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:28:00.107 [2024-10-08 18:55:28.359604] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:28:00.107 [2024-10-08 18:55:28.359782] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72256 ] 00:28:00.107 [2024-10-08 18:55:28.544237] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:00.107 [2024-10-08 18:55:28.769442] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:00.674 [2024-10-08 18:55:29.231727] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:28:00.674 [2024-10-08 18:55:29.231784] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:28:00.674 [2024-10-08 18:55:29.231806] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:28:00.674 [2024-10-08 18:55:29.234197] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:28:00.674 [2024-10-08 18:55:29.234578] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:28:00.674 [2024-10-08 18:55:29.234616] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:28:00.674 [2024-10-08 18:55:29.234747] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
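The hello_bdev example exercised above picks nvme0n1 (the first unclaimed bdev from the dump), opens it, writes "Hello World!" through an io channel, and reads the string back before stopping the app. A minimal sketch of running the same step by hand, assuming the repo layout used by this job (adjust paths to your checkout; SPDK apps typically need root for hugepage access):

    cd /home/vagrant/spdk_repo/spdk
    # --json loads the bdev configuration (the same xNVMe bdevs dumped above);
    # -b names the bdev that hello_bdev opens, writes to, and reads back from.
    sudo ./build/examples/hello_bdev --json test/bdev/bdev.json -b nvme0n1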
00:28:00.674 00:28:00.674 [2024-10-08 18:55:29.234768] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:28:02.051 00:28:02.051 ************************************ 00:28:02.051 END TEST bdev_hello_world 00:28:02.051 ************************************ 00:28:02.051 real 0m2.318s 00:28:02.051 user 0m1.942s 00:28:02.051 sys 0m0.260s 00:28:02.051 18:55:30 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:02.051 18:55:30 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:28:02.051 18:55:30 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:28:02.051 18:55:30 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:02.051 18:55:30 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:02.051 18:55:30 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:28:02.051 ************************************ 00:28:02.051 START TEST bdev_bounds 00:28:02.051 ************************************ 00:28:02.051 18:55:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:28:02.051 18:55:30 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72298 00:28:02.051 18:55:30 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:28:02.051 18:55:30 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72298' 00:28:02.051 Process bdevio pid: 72298 00:28:02.051 18:55:30 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:28:02.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:02.051 18:55:30 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72298 00:28:02.051 18:55:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 72298 ']' 00:28:02.051 18:55:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:02.051 18:55:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:02.051 18:55:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:02.051 18:55:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:02.051 18:55:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:28:02.051 [2024-10-08 18:55:30.714315] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:28:02.051 [2024-10-08 18:55:30.714651] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72298 ] 00:28:02.310 [2024-10-08 18:55:30.878535] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:02.569 [2024-10-08 18:55:31.109586] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:02.569 [2024-10-08 18:55:31.109732] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:02.569 [2024-10-08 18:55:31.109753] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:28:03.135 18:55:31 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:03.135 18:55:31 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:28:03.135 18:55:31 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:28:03.135 I/O targets: 00:28:03.135 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:28:03.135 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:28:03.135 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:28:03.135 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:28:03.135 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:28:03.135 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:28:03.135 00:28:03.135 00:28:03.135 CUnit - A unit testing framework for C - Version 2.1-3 00:28:03.135 http://cunit.sourceforge.net/ 00:28:03.135 00:28:03.135 00:28:03.135 Suite: bdevio tests on: nvme3n1 00:28:03.135 Test: blockdev write read block ...passed 00:28:03.135 Test: blockdev write zeroes read block ...passed 00:28:03.135 Test: blockdev write zeroes read no split ...passed 00:28:03.135 Test: blockdev write zeroes read split ...passed 00:28:03.135 Test: blockdev write zeroes read split partial ...passed 00:28:03.135 Test: blockdev reset ...passed 00:28:03.135 Test: blockdev write read 8 blocks ...passed 00:28:03.135 Test: blockdev write read size > 128k ...passed 00:28:03.135 Test: blockdev write read invalid size ...passed 00:28:03.135 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:03.135 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:03.135 Test: blockdev write read max offset ...passed 00:28:03.135 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:03.135 Test: blockdev writev readv 8 blocks ...passed 00:28:03.135 Test: blockdev writev readv 30 x 1block ...passed 00:28:03.135 Test: blockdev writev readv block ...passed 00:28:03.135 Test: blockdev writev readv size > 128k ...passed 00:28:03.135 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:03.135 Test: blockdev comparev and writev ...passed 00:28:03.135 Test: blockdev nvme passthru rw ...passed 00:28:03.135 Test: blockdev nvme passthru vendor specific ...passed 00:28:03.135 Test: blockdev nvme admin passthru ...passed 00:28:03.135 Test: blockdev copy ...passed 00:28:03.135 Suite: bdevio tests on: nvme2n3 00:28:03.135 Test: blockdev write read block ...passed 00:28:03.135 Test: blockdev write zeroes read block ...passed 00:28:03.135 Test: blockdev write zeroes read no split ...passed 00:28:03.135 Test: blockdev write zeroes read split ...passed 00:28:03.135 Test: blockdev write zeroes read split partial ...passed 00:28:03.135 Test: blockdev reset ...passed 
00:28:03.135 Test: blockdev write read 8 blocks ...passed 00:28:03.135 Test: blockdev write read size > 128k ...passed 00:28:03.135 Test: blockdev write read invalid size ...passed 00:28:03.135 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:03.135 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:03.135 Test: blockdev write read max offset ...passed 00:28:03.135 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:03.135 Test: blockdev writev readv 8 blocks ...passed 00:28:03.135 Test: blockdev writev readv 30 x 1block ...passed 00:28:03.135 Test: blockdev writev readv block ...passed 00:28:03.135 Test: blockdev writev readv size > 128k ...passed 00:28:03.135 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:03.135 Test: blockdev comparev and writev ...passed 00:28:03.135 Test: blockdev nvme passthru rw ...passed 00:28:03.135 Test: blockdev nvme passthru vendor specific ...passed 00:28:03.135 Test: blockdev nvme admin passthru ...passed 00:28:03.135 Test: blockdev copy ...passed 00:28:03.135 Suite: bdevio tests on: nvme2n2 00:28:03.135 Test: blockdev write read block ...passed 00:28:03.135 Test: blockdev write zeroes read block ...passed 00:28:03.135 Test: blockdev write zeroes read no split ...passed 00:28:03.394 Test: blockdev write zeroes read split ...passed 00:28:03.394 Test: blockdev write zeroes read split partial ...passed 00:28:03.394 Test: blockdev reset ...passed 00:28:03.394 Test: blockdev write read 8 blocks ...passed 00:28:03.394 Test: blockdev write read size > 128k ...passed 00:28:03.394 Test: blockdev write read invalid size ...passed 00:28:03.394 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:03.394 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:03.394 Test: blockdev write read max offset ...passed 00:28:03.394 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:03.394 Test: blockdev writev readv 8 blocks ...passed 00:28:03.394 Test: blockdev writev readv 30 x 1block ...passed 00:28:03.394 Test: blockdev writev readv block ...passed 00:28:03.394 Test: blockdev writev readv size > 128k ...passed 00:28:03.394 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:03.394 Test: blockdev comparev and writev ...passed 00:28:03.394 Test: blockdev nvme passthru rw ...passed 00:28:03.394 Test: blockdev nvme passthru vendor specific ...passed 00:28:03.394 Test: blockdev nvme admin passthru ...passed 00:28:03.394 Test: blockdev copy ...passed 00:28:03.394 Suite: bdevio tests on: nvme2n1 00:28:03.394 Test: blockdev write read block ...passed 00:28:03.394 Test: blockdev write zeroes read block ...passed 00:28:03.394 Test: blockdev write zeroes read no split ...passed 00:28:03.394 Test: blockdev write zeroes read split ...passed 00:28:03.394 Test: blockdev write zeroes read split partial ...passed 00:28:03.394 Test: blockdev reset ...passed 00:28:03.394 Test: blockdev write read 8 blocks ...passed 00:28:03.394 Test: blockdev write read size > 128k ...passed 00:28:03.394 Test: blockdev write read invalid size ...passed 00:28:03.394 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:03.394 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:03.394 Test: blockdev write read max offset ...passed 00:28:03.394 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:03.394 Test: blockdev writev readv 8 blocks 
...passed 00:28:03.394 Test: blockdev writev readv 30 x 1block ...passed 00:28:03.394 Test: blockdev writev readv block ...passed 00:28:03.394 Test: blockdev writev readv size > 128k ...passed 00:28:03.394 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:03.394 Test: blockdev comparev and writev ...passed 00:28:03.394 Test: blockdev nvme passthru rw ...passed 00:28:03.394 Test: blockdev nvme passthru vendor specific ...passed 00:28:03.394 Test: blockdev nvme admin passthru ...passed 00:28:03.394 Test: blockdev copy ...passed 00:28:03.394 Suite: bdevio tests on: nvme1n1 00:28:03.394 Test: blockdev write read block ...passed 00:28:03.394 Test: blockdev write zeroes read block ...passed 00:28:03.394 Test: blockdev write zeroes read no split ...passed 00:28:03.394 Test: blockdev write zeroes read split ...passed 00:28:03.394 Test: blockdev write zeroes read split partial ...passed 00:28:03.394 Test: blockdev reset ...passed 00:28:03.394 Test: blockdev write read 8 blocks ...passed 00:28:03.394 Test: blockdev write read size > 128k ...passed 00:28:03.394 Test: blockdev write read invalid size ...passed 00:28:03.394 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:03.394 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:03.394 Test: blockdev write read max offset ...passed 00:28:03.394 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:03.394 Test: blockdev writev readv 8 blocks ...passed 00:28:03.394 Test: blockdev writev readv 30 x 1block ...passed 00:28:03.394 Test: blockdev writev readv block ...passed 00:28:03.394 Test: blockdev writev readv size > 128k ...passed 00:28:03.394 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:03.394 Test: blockdev comparev and writev ...passed 00:28:03.394 Test: blockdev nvme passthru rw ...passed 00:28:03.394 Test: blockdev nvme passthru vendor specific ...passed 00:28:03.394 Test: blockdev nvme admin passthru ...passed 00:28:03.394 Test: blockdev copy ...passed 00:28:03.394 Suite: bdevio tests on: nvme0n1 00:28:03.394 Test: blockdev write read block ...passed 00:28:03.394 Test: blockdev write zeroes read block ...passed 00:28:03.394 Test: blockdev write zeroes read no split ...passed 00:28:03.652 Test: blockdev write zeroes read split ...passed 00:28:03.652 Test: blockdev write zeroes read split partial ...passed 00:28:03.652 Test: blockdev reset ...passed 00:28:03.652 Test: blockdev write read 8 blocks ...passed 00:28:03.652 Test: blockdev write read size > 128k ...passed 00:28:03.652 Test: blockdev write read invalid size ...passed 00:28:03.652 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:03.652 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:03.652 Test: blockdev write read max offset ...passed 00:28:03.652 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:03.652 Test: blockdev writev readv 8 blocks ...passed 00:28:03.652 Test: blockdev writev readv 30 x 1block ...passed 00:28:03.652 Test: blockdev writev readv block ...passed 00:28:03.652 Test: blockdev writev readv size > 128k ...passed 00:28:03.652 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:03.652 Test: blockdev comparev and writev ...passed 00:28:03.652 Test: blockdev nvme passthru rw ...passed 00:28:03.652 Test: blockdev nvme passthru vendor specific ...passed 00:28:03.652 Test: blockdev nvme admin passthru ...passed 00:28:03.652 Test: blockdev copy ...passed 
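Every suite above reports the same test names because bdevio registers one CUnit suite per bdev; tests for io types a bdev does not advertise (the xNVMe bdevs dumped earlier report unmap, flush, and reset as false) are effectively no-ops here. A hedged jq sketch, my own construction rather than part of the harness, for listing what each bdev actually supports:

    # list the supported io types per bdev, cf. the bdev_get_bdevs dump earlier
    ./scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[] | "\(.name): \(.supported_io_types | to_entries | map(select(.value).key) | join(", "))"'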
00:28:03.652 00:28:03.652 Run Summary: Type Total Ran Passed Failed Inactive 00:28:03.652 suites 6 6 n/a 0 0 00:28:03.652 tests 138 138 138 0 0 00:28:03.652 asserts 780 780 780 0 n/a 00:28:03.652 00:28:03.652 Elapsed time = 1.418 seconds 00:28:03.652 0 00:28:03.652 18:55:32 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72298 00:28:03.652 18:55:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 72298 ']' 00:28:03.652 18:55:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 72298 00:28:03.652 18:55:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:28:03.652 18:55:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:03.652 18:55:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72298 00:28:03.652 killing process with pid 72298 00:28:03.652 18:55:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:03.652 18:55:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:03.652 18:55:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72298' 00:28:03.652 18:55:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 72298 00:28:03.652 18:55:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 72298 00:28:05.028 ************************************ 00:28:05.028 END TEST bdev_bounds 00:28:05.028 ************************************ 00:28:05.028 18:55:33 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:28:05.028 00:28:05.028 real 0m2.983s 00:28:05.028 user 0m7.019s 00:28:05.028 sys 0m0.415s 00:28:05.028 18:55:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:05.028 18:55:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:28:05.028 18:55:33 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:28:05.028 18:55:33 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:28:05.028 18:55:33 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:05.028 18:55:33 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:28:05.028 ************************************ 00:28:05.028 START TEST bdev_nbd 00:28:05.028 ************************************ 00:28:05.028 18:55:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:28:05.028 18:55:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:28:05.028 18:55:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:28:05.028 18:55:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:05.028 18:55:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:28:05.028 18:55:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:28:05.028 18:55:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:28:05.028 18:55:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
00:28:05.028 18:55:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:28:05.028 18:55:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:28:05.028 18:55:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:28:05.028 18:55:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:28:05.028 18:55:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:28:05.028 18:55:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:28:05.028 18:55:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:28:05.028 18:55:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:28:05.028 18:55:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72365 00:28:05.028 18:55:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:28:05.028 18:55:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:28:05.028 18:55:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72365 /var/tmp/spdk-nbd.sock 00:28:05.028 18:55:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 72365 ']' 00:28:05.028 18:55:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:28:05.028 18:55:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:05.028 18:55:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:28:05.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:28:05.028 18:55:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:05.028 18:55:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:28:05.286 [2024-10-08 18:55:33.797475] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
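The nbd test exports each bdev as a kernel block device through bdev_svc's RPC socket and verifies every node with a one-block direct-I/O dd, which is what the waitfornbd/dd output below shows. A sketch of one export/verify/teardown cycle by hand, assuming bdev_svc is already listening on /var/tmp/spdk-nbd.sock (the /dev/null target is a simplification; the harness dd's into a scratch file):

    sudo modprobe nbd    # the harness only proceeds if /sys/module/nbd exists
    sudo ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0
    sudo dd if=/dev/nbd0 of=/dev/null bs=4096 count=1 iflag=direct    # one-block read check
    sudo ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0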
00:28:05.287 [2024-10-08 18:55:33.798121] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:05.287 [2024-10-08 18:55:33.985802] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.545 [2024-10-08 18:55:34.216328] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:06.113 18:55:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:06.113 18:55:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:28:06.113 18:55:34 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:28:06.113 18:55:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:06.113 18:55:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:28:06.113 18:55:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:28:06.113 18:55:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:28:06.113 18:55:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:06.113 18:55:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:28:06.113 18:55:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:28:06.113 18:55:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:28:06.113 18:55:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:28:06.113 18:55:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:28:06.113 18:55:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:28:06.113 18:55:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:28:06.372 18:55:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:28:06.372 18:55:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:28:06.372 18:55:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:28:06.372 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:28:06.372 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:28:06.372 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:06.372 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:06.372 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:28:06.372 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:28:06.372 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:06.372 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:06.372 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:06.372 
1+0 records in 00:28:06.372 1+0 records out 00:28:06.372 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000623621 s, 6.6 MB/s 00:28:06.372 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:06.372 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:28:06.372 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:06.372 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:06.372 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:28:06.372 18:55:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:28:06.372 18:55:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:28:06.372 18:55:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:28:06.631 18:55:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:28:06.631 18:55:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:28:06.631 18:55:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:28:06.631 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:28:06.631 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:28:06.631 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:06.631 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:06.631 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:28:06.631 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:28:06.631 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:06.631 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:06.631 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:06.631 1+0 records in 00:28:06.631 1+0 records out 00:28:06.631 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000457865 s, 8.9 MB/s 00:28:06.631 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:06.890 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:28:06.890 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:06.890 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:06.890 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:28:06.890 18:55:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:28:06.890 18:55:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:28:06.890 18:55:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:28:06.890 18:55:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:28:06.890 18:55:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:28:06.890 18:55:35 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:28:06.890 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:28:06.890 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:28:06.890 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:06.890 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:06.890 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:28:06.890 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:28:06.890 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:06.890 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:06.890 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:06.890 1+0 records in 00:28:06.890 1+0 records out 00:28:06.890 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000626152 s, 6.5 MB/s 00:28:07.150 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:07.150 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:28:07.150 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:07.150 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:07.150 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:28:07.150 18:55:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:28:07.150 18:55:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:28:07.150 18:55:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:28:07.150 18:55:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:28:07.150 18:55:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:28:07.150 18:55:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:28:07.150 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:28:07.150 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:28:07.150 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:07.150 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:07.150 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:28:07.150 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:28:07.150 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:07.150 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:07.150 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:07.150 1+0 records in 00:28:07.150 1+0 records out 00:28:07.150 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000564742 s, 7.3 MB/s 00:28:07.150 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:07.409 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:28:07.409 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:07.409 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:07.409 18:55:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:28:07.409 18:55:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:28:07.409 18:55:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:28:07.409 18:55:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:28:07.667 18:55:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:28:07.667 18:55:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:28:07.667 18:55:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:28:07.667 18:55:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:28:07.667 18:55:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:28:07.667 18:55:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:07.667 18:55:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:07.667 18:55:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:28:07.667 18:55:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:28:07.667 18:55:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:07.667 18:55:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:07.667 18:55:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:07.667 1+0 records in 00:28:07.667 1+0 records out 00:28:07.667 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000974917 s, 4.2 MB/s 00:28:07.667 18:55:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:07.667 18:55:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:28:07.667 18:55:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:07.667 18:55:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:07.667 18:55:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:28:07.667 18:55:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:28:07.667 18:55:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:28:07.667 18:55:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:28:07.926 18:55:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:28:07.926 18:55:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:28:07.926 18:55:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:28:07.926 18:55:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:28:07.926 18:55:36 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:28:07.926 18:55:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:07.926 18:55:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:07.926 18:55:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:28:07.926 18:55:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:28:07.926 18:55:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:07.926 18:55:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:07.926 18:55:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:07.926 1+0 records in 00:28:07.926 1+0 records out 00:28:07.926 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000735032 s, 5.6 MB/s 00:28:07.926 18:55:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:07.926 18:55:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:28:07.926 18:55:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:07.926 18:55:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:07.926 18:55:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:28:07.926 18:55:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:28:07.926 18:55:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:28:07.926 18:55:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:08.185 18:55:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:28:08.185 { 00:28:08.185 "nbd_device": "/dev/nbd0", 00:28:08.185 "bdev_name": "nvme0n1" 00:28:08.185 }, 00:28:08.185 { 00:28:08.185 "nbd_device": "/dev/nbd1", 00:28:08.185 "bdev_name": "nvme1n1" 00:28:08.185 }, 00:28:08.185 { 00:28:08.185 "nbd_device": "/dev/nbd2", 00:28:08.185 "bdev_name": "nvme2n1" 00:28:08.185 }, 00:28:08.185 { 00:28:08.185 "nbd_device": "/dev/nbd3", 00:28:08.185 "bdev_name": "nvme2n2" 00:28:08.185 }, 00:28:08.185 { 00:28:08.185 "nbd_device": "/dev/nbd4", 00:28:08.185 "bdev_name": "nvme2n3" 00:28:08.185 }, 00:28:08.185 { 00:28:08.185 "nbd_device": "/dev/nbd5", 00:28:08.185 "bdev_name": "nvme3n1" 00:28:08.185 } 00:28:08.185 ]' 00:28:08.185 18:55:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:28:08.185 18:55:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:28:08.185 { 00:28:08.185 "nbd_device": "/dev/nbd0", 00:28:08.185 "bdev_name": "nvme0n1" 00:28:08.185 }, 00:28:08.185 { 00:28:08.185 "nbd_device": "/dev/nbd1", 00:28:08.185 "bdev_name": "nvme1n1" 00:28:08.185 }, 00:28:08.185 { 00:28:08.185 "nbd_device": "/dev/nbd2", 00:28:08.185 "bdev_name": "nvme2n1" 00:28:08.185 }, 00:28:08.185 { 00:28:08.185 "nbd_device": "/dev/nbd3", 00:28:08.185 "bdev_name": "nvme2n2" 00:28:08.185 }, 00:28:08.185 { 00:28:08.185 "nbd_device": "/dev/nbd4", 00:28:08.185 "bdev_name": "nvme2n3" 00:28:08.185 }, 00:28:08.185 { 00:28:08.185 "nbd_device": "/dev/nbd5", 00:28:08.185 "bdev_name": "nvme3n1" 00:28:08.185 } 00:28:08.185 ]' 00:28:08.185 18:55:36 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:28:08.185 18:55:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:28:08.185 18:55:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:08.185 18:55:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:28:08.185 18:55:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:08.185 18:55:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:28:08.185 18:55:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:08.185 18:55:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:08.502 18:55:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:08.502 18:55:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:08.502 18:55:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:08.502 18:55:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:08.502 18:55:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:08.502 18:55:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:08.502 18:55:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:08.502 18:55:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:08.502 18:55:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:08.502 18:55:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:28:08.761 18:55:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:08.761 18:55:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:08.761 18:55:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:08.761 18:55:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:08.761 18:55:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:08.761 18:55:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:08.761 18:55:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:08.761 18:55:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:08.761 18:55:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:08.761 18:55:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:28:08.761 18:55:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:28:08.761 18:55:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:28:08.761 18:55:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:28:08.761 18:55:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:08.761 18:55:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:08.761 18:55:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:28:08.761 18:55:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:08.761 18:55:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:08.761 18:55:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:08.761 18:55:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:28:09.020 18:55:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:28:09.020 18:55:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:28:09.020 18:55:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:28:09.020 18:55:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:09.020 18:55:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:09.020 18:55:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:28:09.020 18:55:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:09.020 18:55:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:09.020 18:55:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:09.020 18:55:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:28:09.279 18:55:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:28:09.279 18:55:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:28:09.279 18:55:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:28:09.279 18:55:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:09.279 18:55:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:09.279 18:55:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:28:09.279 18:55:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:09.279 18:55:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:09.279 18:55:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:09.279 18:55:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:28:09.538 18:55:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:28:09.538 18:55:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:28:09.538 18:55:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:28:09.538 18:55:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:09.538 18:55:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:09.538 18:55:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:28:09.538 18:55:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:09.538 18:55:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:09.538 18:55:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:09.538 18:55:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:09.538 18:55:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:09.795 18:55:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:28:09.795 18:55:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:28:09.795 18:55:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:09.795 18:55:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:28:09.795 18:55:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:09.795 18:55:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:28:09.795 18:55:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:28:09.795 18:55:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:28:09.795 18:55:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:28:09.795 18:55:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:28:09.795 18:55:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:28:09.795 18:55:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:28:09.795 18:55:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:28:09.795 18:55:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:09.795 18:55:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:28:09.795 18:55:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:28:09.795 18:55:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:28:09.795 18:55:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:28:09.795 18:55:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:28:09.795 18:55:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:09.795 18:55:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:28:09.795 18:55:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:09.796 18:55:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:28:09.796 18:55:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:09.796 18:55:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:28:09.796 18:55:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:09.796 18:55:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:28:09.796 18:55:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:28:10.053 /dev/nbd0 00:28:10.053 18:55:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:10.053 18:55:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:10.053 18:55:38 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:28:10.053 18:55:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:28:10.053 18:55:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:10.053 18:55:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:10.053 18:55:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:28:10.053 18:55:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:28:10.053 18:55:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:10.053 18:55:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:10.053 18:55:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:10.053 1+0 records in 00:28:10.053 1+0 records out 00:28:10.053 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000382836 s, 10.7 MB/s 00:28:10.053 18:55:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:10.053 18:55:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:28:10.053 18:55:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:10.053 18:55:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:10.053 18:55:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:28:10.053 18:55:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:10.053 18:55:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:28:10.053 18:55:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:28:10.313 /dev/nbd1 00:28:10.313 18:55:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:10.313 18:55:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:10.313 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:28:10.313 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:28:10.313 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:10.313 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:10.313 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:28:10.313 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:28:10.313 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:10.313 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:10.313 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:10.313 1+0 records in 00:28:10.313 1+0 records out 00:28:10.313 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000567744 s, 7.2 MB/s 00:28:10.313 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:10.313 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:28:10.313 18:55:39 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:10.313 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:10.313 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:28:10.313 18:55:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:10.313 18:55:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:28:10.313 18:55:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:28:10.880 /dev/nbd10 00:28:10.880 18:55:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:28:10.880 18:55:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:28:10.880 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:28:10.880 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:28:10.880 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:10.880 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:10.880 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:28:10.880 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:28:10.880 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:10.880 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:10.880 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:10.880 1+0 records in 00:28:10.880 1+0 records out 00:28:10.880 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000695949 s, 5.9 MB/s 00:28:10.880 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:10.880 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:28:10.880 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:10.880 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:10.880 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:28:10.880 18:55:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:10.880 18:55:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:28:10.880 18:55:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:28:10.880 /dev/nbd11 00:28:10.880 18:55:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:28:10.880 18:55:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:28:10.880 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:28:10.880 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:28:10.880 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:10.880 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:10.880 18:55:39 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:28:10.881 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:28:10.881 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:10.881 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:10.881 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:10.881 1+0 records in 00:28:10.881 1+0 records out 00:28:10.881 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000683917 s, 6.0 MB/s 00:28:10.881 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:10.881 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:28:10.881 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:10.881 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:10.881 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:28:10.881 18:55:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:10.881 18:55:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:28:10.881 18:55:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:28:11.139 /dev/nbd12 00:28:11.398 18:55:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:28:11.398 18:55:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:28:11.398 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:28:11.398 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:28:11.398 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:11.398 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:11.398 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:28:11.398 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:28:11.398 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:11.398 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:11.398 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:11.398 1+0 records in 00:28:11.398 1+0 records out 00:28:11.398 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000728771 s, 5.6 MB/s 00:28:11.398 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:11.398 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:28:11.398 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:11.398 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:11.398 18:55:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:28:11.398 18:55:39 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:11.398 18:55:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:28:11.398 18:55:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:28:11.657 /dev/nbd13 00:28:11.657 18:55:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:28:11.657 18:55:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:28:11.657 18:55:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:28:11.657 18:55:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:28:11.657 18:55:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:11.657 18:55:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:11.657 18:55:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:28:11.657 18:55:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:28:11.657 18:55:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:11.657 18:55:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:11.657 18:55:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:11.657 1+0 records in 00:28:11.657 1+0 records out 00:28:11.657 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000760237 s, 5.4 MB/s 00:28:11.657 18:55:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:11.657 18:55:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:28:11.657 18:55:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:11.657 18:55:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:11.657 18:55:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:28:11.657 18:55:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:11.657 18:55:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:28:11.657 18:55:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:11.657 18:55:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:11.657 18:55:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:11.916 18:55:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:28:11.916 { 00:28:11.916 "nbd_device": "/dev/nbd0", 00:28:11.917 "bdev_name": "nvme0n1" 00:28:11.917 }, 00:28:11.917 { 00:28:11.917 "nbd_device": "/dev/nbd1", 00:28:11.917 "bdev_name": "nvme1n1" 00:28:11.917 }, 00:28:11.917 { 00:28:11.917 "nbd_device": "/dev/nbd10", 00:28:11.917 "bdev_name": "nvme2n1" 00:28:11.917 }, 00:28:11.917 { 00:28:11.917 "nbd_device": "/dev/nbd11", 00:28:11.917 "bdev_name": "nvme2n2" 00:28:11.917 }, 00:28:11.917 { 00:28:11.917 "nbd_device": "/dev/nbd12", 00:28:11.917 "bdev_name": "nvme2n3" 00:28:11.917 }, 00:28:11.917 { 00:28:11.917 "nbd_device": "/dev/nbd13", 00:28:11.917 "bdev_name": "nvme3n1" 00:28:11.917 } 00:28:11.917 ]' 00:28:11.917 18:55:40 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:28:11.917 { 00:28:11.917 "nbd_device": "/dev/nbd0", 00:28:11.917 "bdev_name": "nvme0n1" 00:28:11.917 }, 00:28:11.917 { 00:28:11.917 "nbd_device": "/dev/nbd1", 00:28:11.917 "bdev_name": "nvme1n1" 00:28:11.917 }, 00:28:11.917 { 00:28:11.917 "nbd_device": "/dev/nbd10", 00:28:11.917 "bdev_name": "nvme2n1" 00:28:11.917 }, 00:28:11.917 { 00:28:11.917 "nbd_device": "/dev/nbd11", 00:28:11.917 "bdev_name": "nvme2n2" 00:28:11.917 }, 00:28:11.917 { 00:28:11.917 "nbd_device": "/dev/nbd12", 00:28:11.917 "bdev_name": "nvme2n3" 00:28:11.917 }, 00:28:11.917 { 00:28:11.917 "nbd_device": "/dev/nbd13", 00:28:11.917 "bdev_name": "nvme3n1" 00:28:11.917 } 00:28:11.917 ]' 00:28:11.917 18:55:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:11.917 18:55:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:28:11.917 /dev/nbd1 00:28:11.917 /dev/nbd10 00:28:11.917 /dev/nbd11 00:28:11.917 /dev/nbd12 00:28:11.917 /dev/nbd13' 00:28:11.917 18:55:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:28:11.917 /dev/nbd1 00:28:11.917 /dev/nbd10 00:28:11.917 /dev/nbd11 00:28:11.917 /dev/nbd12 00:28:11.917 /dev/nbd13' 00:28:11.917 18:55:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:11.917 18:55:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:28:11.917 18:55:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:28:11.917 18:55:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:28:11.917 18:55:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:28:11.917 18:55:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:28:11.917 18:55:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:28:11.917 18:55:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:11.917 18:55:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:28:11.917 18:55:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:28:11.917 18:55:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:28:11.917 18:55:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:28:11.917 256+0 records in 00:28:11.917 256+0 records out 00:28:11.917 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00741029 s, 142 MB/s 00:28:11.917 18:55:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:11.917 18:55:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:28:11.917 256+0 records in 00:28:11.917 256+0 records out 00:28:11.917 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.123855 s, 8.5 MB/s 00:28:11.917 18:55:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:11.917 18:55:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:28:12.175 256+0 records in 00:28:12.175 256+0 records out 00:28:12.175 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.145297 s, 7.2 MB/s 00:28:12.175 18:55:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:12.175 18:55:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:28:12.175 256+0 records in 00:28:12.175 256+0 records out 00:28:12.175 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.128585 s, 8.2 MB/s 00:28:12.175 18:55:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:12.175 18:55:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:28:12.434 256+0 records in 00:28:12.434 256+0 records out 00:28:12.434 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.128883 s, 8.1 MB/s 00:28:12.434 18:55:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:12.434 18:55:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:28:12.434 256+0 records in 00:28:12.434 256+0 records out 00:28:12.434 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.12592 s, 8.3 MB/s 00:28:12.434 18:55:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:12.434 18:55:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:28:12.694 256+0 records in 00:28:12.694 256+0 records out 00:28:12.694 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.131364 s, 8.0 MB/s 00:28:12.694 18:55:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:28:12.694 18:55:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:28:12.694 18:55:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:12.694 18:55:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:28:12.694 18:55:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:28:12.694 18:55:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:28:12.694 18:55:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:28:12.694 18:55:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:12.694 18:55:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:28:12.694 18:55:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:12.694 18:55:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:28:12.694 18:55:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:12.694 18:55:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:28:12.694 18:55:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:12.694 18:55:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
/dev/nbd11 00:28:12.694 18:55:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:12.694 18:55:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:28:12.694 18:55:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:12.694 18:55:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:28:12.694 18:55:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:28:12.694 18:55:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:28:12.694 18:55:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:12.694 18:55:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:28:12.694 18:55:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:12.694 18:55:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:28:12.694 18:55:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:12.694 18:55:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:13.262 18:55:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:13.262 18:55:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:13.262 18:55:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:13.262 18:55:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:13.262 18:55:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:13.262 18:55:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:13.262 18:55:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:13.262 18:55:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:13.262 18:55:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:13.262 18:55:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:28:13.521 18:55:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:13.521 18:55:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:13.521 18:55:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:13.521 18:55:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:13.521 18:55:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:13.521 18:55:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:13.521 18:55:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:13.521 18:55:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:13.521 18:55:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:13.521 18:55:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:28:13.521 18:55:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:28:13.521 18:55:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:28:13.521 18:55:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:28:13.521 18:55:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:13.521 18:55:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:13.521 18:55:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:28:13.521 18:55:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:13.521 18:55:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:13.521 18:55:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:13.521 18:55:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:28:13.779 18:55:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:28:13.779 18:55:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:28:13.779 18:55:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:28:13.779 18:55:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:13.779 18:55:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:13.779 18:55:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:28:13.779 18:55:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:13.779 18:55:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:13.779 18:55:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:13.779 18:55:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:28:14.037 18:55:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:28:14.037 18:55:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:28:14.037 18:55:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:28:14.037 18:55:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:14.037 18:55:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:14.037 18:55:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:28:14.037 18:55:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:14.037 18:55:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:14.037 18:55:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:14.037 18:55:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:28:14.295 18:55:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:28:14.295 18:55:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:28:14.295 18:55:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:28:14.295 18:55:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:14.295 18:55:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 
-- # (( i <= 20 )) 00:28:14.295 18:55:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:28:14.295 18:55:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:14.295 18:55:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:14.295 18:55:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:14.295 18:55:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:14.295 18:55:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:14.863 18:55:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:28:14.863 18:55:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:14.863 18:55:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:28:14.863 18:55:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:28:14.863 18:55:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:28:14.863 18:55:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:14.863 18:55:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:28:14.863 18:55:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:28:14.863 18:55:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:28:14.863 18:55:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:28:14.863 18:55:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:28:14.863 18:55:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:28:14.863 18:55:43 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:28:14.863 18:55:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:14.863 18:55:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:28:14.863 18:55:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:28:15.122 malloc_lvol_verify 00:28:15.122 18:55:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:28:15.380 f55adf57-6508-4856-a97d-a31281b1044f 00:28:15.380 18:55:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:28:15.638 a8fb81e7-bbbd-4fad-a3e2-5115e79d3237 00:28:15.638 18:55:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:28:15.897 /dev/nbd0 00:28:15.897 18:55:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:28:15.897 18:55:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:28:15.897 18:55:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:28:15.897 18:55:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:28:15.897 18:55:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:28:15.897 mke2fs 1.47.0 (5-Feb-2023) 00:28:15.897 
Discarding device blocks: 0/4096 done 00:28:15.897 Creating filesystem with 4096 1k blocks and 1024 inodes 00:28:15.897 00:28:15.897 Allocating group tables: 0/1 done 00:28:15.897 Writing inode tables: 0/1 done 00:28:15.897 Creating journal (1024 blocks): done 00:28:15.897 Writing superblocks and filesystem accounting information: 0/1 done 00:28:15.897 00:28:15.897 18:55:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:28:15.897 18:55:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:15.897 18:55:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:15.897 18:55:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:15.897 18:55:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:28:15.897 18:55:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:15.897 18:55:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:16.156 18:55:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:16.156 18:55:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:16.156 18:55:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:16.156 18:55:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:16.156 18:55:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:16.156 18:55:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:16.156 18:55:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:16.156 18:55:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:16.156 18:55:44 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72365 00:28:16.156 18:55:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 72365 ']' 00:28:16.156 18:55:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 72365 00:28:16.156 18:55:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:28:16.156 18:55:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:16.156 18:55:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72365 00:28:16.156 killing process with pid 72365 00:28:16.156 18:55:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:16.156 18:55:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:16.156 18:55:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72365' 00:28:16.156 18:55:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 72365 00:28:16.156 18:55:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 72365 00:28:17.545 18:55:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:28:17.545 00:28:17.545 real 0m12.631s 00:28:17.545 user 0m16.659s 00:28:17.545 sys 0m5.040s 00:28:17.545 ************************************ 00:28:17.545 END TEST bdev_nbd 00:28:17.545 ************************************ 00:28:17.545 18:55:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:17.545 18:55:46 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@10 -- # set +x 00:28:17.805 18:55:46 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:28:17.805 18:55:46 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:28:17.805 18:55:46 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:28:17.805 18:55:46 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:28:17.805 18:55:46 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:17.805 18:55:46 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:17.805 18:55:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:28:17.805 ************************************ 00:28:17.805 START TEST bdev_fio 00:28:17.805 ************************************ 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:28:17.805 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:28:17.805 
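The fio_config_gen trace above assembles /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio: a verify-workload template is cat'ed in, serialize_overlap=1 is appended because the detected fio-3.35 matches fio-3*, and the loop that follows emits one job section per bdev. A minimal sketch of that assembly, reconstructed only from the echoed lines in the trace (the template contents that fio_config_gen cats in first are not reproduced here):

    # Sketch of the config assembly traced above; [global]/template lines omitted.
    {
      echo 'serialize_overlap=1'   # appended whenever fio --version matches fio-3*
      for b in nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1; do
        printf '[job_%s]\nfilename=%s\n' "$b" "$b"   # one job section per bdev
      done
    } >> /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio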
18:55:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:28:17.805 ************************************ 00:28:17.805 START TEST bdev_fio_rw_verify 00:28:17.805 ************************************ 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:28:17.805 18:55:46 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:17.806 18:55:46 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:28:18.064 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:18.064 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:18.064 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:18.064 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:18.064 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:18.064 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:28:18.064 fio-3.35 00:28:18.064 Starting 6 threads 00:28:30.294 00:28:30.294 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=72792: Tue Oct 8 18:55:57 2024 00:28:30.294 read: IOPS=28.1k, BW=110MiB/s (115MB/s)(1096MiB/10001msec) 00:28:30.294 slat (usec): min=2, max=1364, avg= 7.20, stdev= 6.57 00:28:30.294 clat (usec): min=129, max=8062, avg=669.80, 
stdev=276.44 00:28:30.294 lat (usec): min=132, max=8066, avg=677.00, stdev=277.38 00:28:30.294 clat percentiles (usec): 00:28:30.294 | 50.000th=[ 685], 99.000th=[ 1385], 99.900th=[ 2212], 99.990th=[ 6652], 00:28:30.294 | 99.999th=[ 8029] 00:28:30.294 write: IOPS=28.4k, BW=111MiB/s (116MB/s)(1110MiB/10001msec); 0 zone resets 00:28:30.294 slat (usec): min=8, max=3799, avg=26.30, stdev=32.15 00:28:30.294 clat (usec): min=97, max=7038, avg=763.57, stdev=279.75 00:28:30.294 lat (usec): min=123, max=7060, avg=789.87, stdev=282.52 00:28:30.294 clat percentiles (usec): 00:28:30.294 | 50.000th=[ 766], 99.000th=[ 1516], 99.900th=[ 2212], 99.990th=[ 3097], 00:28:30.294 | 99.999th=[ 6980] 00:28:30.294 bw ( KiB/s): min=96016, max=144864, per=99.71%, avg=113310.11, stdev=2438.97, samples=114 00:28:30.294 iops : min=24004, max=36216, avg=28327.42, stdev=609.73, samples=114 00:28:30.294 lat (usec) : 100=0.01%, 250=3.34%, 500=18.83%, 750=32.17%, 1000=33.72% 00:28:30.294 lat (msec) : 2=11.79%, 4=0.15%, 10=0.01% 00:28:30.294 cpu : usr=55.92%, sys=29.93%, ctx=7352, majf=0, minf=24064 00:28:30.294 IO depths : 1=12.1%, 2=24.6%, 4=50.4%, 8=12.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:30.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.294 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:30.294 issued rwts: total=280674,284141,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:30.294 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:30.294 00:28:30.294 Run status group 0 (all jobs): 00:28:30.294 READ: bw=110MiB/s (115MB/s), 110MiB/s-110MiB/s (115MB/s-115MB/s), io=1096MiB (1150MB), run=10001-10001msec 00:28:30.294 WRITE: bw=111MiB/s (116MB/s), 111MiB/s-111MiB/s (116MB/s-116MB/s), io=1110MiB (1164MB), run=10001-10001msec 00:28:30.553 ----------------------------------------------------- 00:28:30.553 Suppressions used: 00:28:30.553 count bytes template 00:28:30.553 6 48 /usr/src/fio/parse.c 00:28:30.553 3269 313824 /usr/src/fio/iolog.c 00:28:30.553 1 8 libtcmalloc_minimal.so 00:28:30.553 1 904 libcrypto.so 00:28:30.553 ----------------------------------------------------- 00:28:30.553 00:28:30.553 00:28:30.553 real 0m12.748s 00:28:30.553 user 0m35.764s 00:28:30.553 sys 0m18.395s 00:28:30.553 18:55:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:30.553 18:55:59 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:28:30.553 ************************************ 00:28:30.553 END TEST bdev_fio_rw_verify 00:28:30.553 ************************************ 00:28:30.553 18:55:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:28:30.553 18:55:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:28:30.553 18:55:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:28:30.553 18:55:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:28:30.553 18:55:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:28:30.553 18:55:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:28:30.553 18:55:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:28:30.553 18:55:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local 
fio_dir=/usr/src/fio 00:28:30.553 18:55:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:28:30.553 18:55:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:28:30.553 18:55:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:28:30.553 18:55:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:28:30.553 18:55:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:28:30.553 18:55:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:28:30.553 18:55:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:28:30.553 18:55:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:28:30.553 18:55:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:28:30.553 18:55:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "1245b676-d6e3-4852-9152-b9aa86e8dcf0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "1245b676-d6e3-4852-9152-b9aa86e8dcf0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "716f34eb-7065-4cbc-9469-d73a7ce85d1a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "716f34eb-7065-4cbc-9469-d73a7ce85d1a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "2a6bbc73-e44d-45a0-a3fd-16f86f3485c3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2a6bbc73-e44d-45a0-a3fd-16f86f3485c3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "92f68112-4c90-45d1-afe1-9fdb94265627"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "92f68112-4c90-45d1-afe1-9fdb94265627",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "fd87dd81-6a22-4f83-a7eb-e1f60220f464"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "fd87dd81-6a22-4f83-a7eb-e1f60220f464",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "5a3aead9-bfee-4a83-a087-55ff6fccb2ea"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "5a3aead9-bfee-4a83-a087-55ff6fccb2ea",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:28:30.553 18:55:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:28:30.553 18:55:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:28:30.553 /home/vagrant/spdk_repo/spdk 00:28:30.553 18:55:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:28:30.553 18:55:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:28:30.553 18:55:59 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
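The rw-verify step above wraps fio via fio_bdev: it ldd's the spdk_bdev plugin, greps out the ASan runtime the plugin links against, and preloads both so the sanitizer runtime is initialized first (ASan refuses to start otherwise). Condensed from the trace, with paths and flags as logged:

    # ASan must come first in LD_PRELOAD, ahead of the instrumented fio plugin.
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        --spdk_mem=0 --verify_state_save=0 \
        /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio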
00:28:30.553 00:28:30.553 real 0m12.934s 00:28:30.553 user 0m35.860s 00:28:30.553 sys 0m18.491s 00:28:30.553 18:55:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:30.553 ************************************ 00:28:30.553 END TEST bdev_fio 00:28:30.553 ************************************ 00:28:30.553 18:55:59 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:28:30.854 18:55:59 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:30.854 18:55:59 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:28:30.854 18:55:59 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:28:30.854 18:55:59 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:30.854 18:55:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:28:30.854 ************************************ 00:28:30.854 START TEST bdev_verify 00:28:30.854 ************************************ 00:28:30.854 18:55:59 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:28:30.854 [2024-10-08 18:55:59.471515] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:28:30.854 [2024-10-08 18:55:59.471682] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72969 ] 00:28:31.115 [2024-10-08 18:55:59.663370] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:31.373 [2024-10-08 18:55:59.970326] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:31.373 [2024-10-08 18:55:59.970334] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:31.941 Running I/O for 5 seconds... 
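The verify pass now running uses bdevperf rather than fio; the flags below are copied from the run_test line above (-q queue depth, -o I/O size in bytes, -w workload, -t run time in seconds, -m core mask 0x3 matching the two reactor cores just started):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3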
00:28:34.261 22784.00 IOPS, 89.00 MiB/s [2024-10-08T18:56:03.953Z] 23280.00 IOPS, 90.94 MiB/s [2024-10-08T18:56:04.890Z] 22986.67 IOPS, 89.79 MiB/s [2024-10-08T18:56:05.824Z] 22736.00 IOPS, 88.81 MiB/s 00:28:37.067 Latency(us) 00:28:37.067 [2024-10-08T18:56:05.825Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:37.068 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:37.068 Verification LBA range: start 0x0 length 0xa0000 00:28:37.068 nvme0n1 : 5.05 1647.46 6.44 0.00 0.00 77545.97 11047.50 88379.98 00:28:37.068 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:37.068 Verification LBA range: start 0xa0000 length 0xa0000 00:28:37.068 nvme0n1 : 5.07 1615.60 6.31 0.00 0.00 79076.69 12670.29 81389.47 00:28:37.068 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:37.068 Verification LBA range: start 0x0 length 0xbd0bd 00:28:37.068 nvme1n1 : 5.05 2850.81 11.14 0.00 0.00 44631.61 5242.88 70404.39 00:28:37.068 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:37.068 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:28:37.068 nvme1n1 : 5.06 2757.78 10.77 0.00 0.00 46146.16 5024.43 66909.14 00:28:37.068 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:37.068 Verification LBA range: start 0x0 length 0x80000 00:28:37.068 nvme2n1 : 5.04 1649.48 6.44 0.00 0.00 77059.12 11109.91 85384.05 00:28:37.068 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:37.068 Verification LBA range: start 0x80000 length 0x80000 00:28:37.068 nvme2n1 : 5.06 1620.12 6.33 0.00 0.00 78379.20 9549.53 74898.29 00:28:37.068 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:37.068 Verification LBA range: start 0x0 length 0x80000 00:28:37.068 nvme2n2 : 5.05 1646.39 6.43 0.00 0.00 77042.77 7864.32 72901.00 00:28:37.068 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:37.068 Verification LBA range: start 0x80000 length 0x80000 00:28:37.068 nvme2n2 : 5.06 1618.56 6.32 0.00 0.00 78289.41 10485.76 72401.68 00:28:37.068 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:37.068 Verification LBA range: start 0x0 length 0x80000 00:28:37.068 nvme2n3 : 5.05 1645.92 6.43 0.00 0.00 76910.73 8550.89 75896.93 00:28:37.068 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:37.068 Verification LBA range: start 0x80000 length 0x80000 00:28:37.068 nvme2n3 : 5.06 1617.92 6.32 0.00 0.00 78167.92 11484.40 66409.81 00:28:37.068 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:37.068 Verification LBA range: start 0x0 length 0x20000 00:28:37.068 nvme3n1 : 5.06 1668.21 6.52 0.00 0.00 75726.76 1357.53 85883.37 00:28:37.068 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:37.068 Verification LBA range: start 0x20000 length 0x20000 00:28:37.068 nvme3n1 : 5.08 1638.14 6.40 0.00 0.00 77058.76 2808.69 76396.25 00:28:37.068 [2024-10-08T18:56:05.825Z] =================================================================================================================== 00:28:37.068 [2024-10-08T18:56:05.825Z] Total : 21976.39 85.85 0.00 0.00 69320.13 1357.53 88379.98 00:28:38.443 00:28:38.443 real 0m7.733s 00:28:38.443 user 0m11.864s 00:28:38.443 sys 0m1.986s 00:28:38.443 18:56:07 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:38.443 18:56:07 
blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:28:38.443 ************************************ 00:28:38.443 END TEST bdev_verify 00:28:38.443 ************************************ 00:28:38.443 18:56:07 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:28:38.443 18:56:07 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:28:38.443 18:56:07 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:38.443 18:56:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:28:38.443 ************************************ 00:28:38.443 START TEST bdev_verify_big_io 00:28:38.443 ************************************ 00:28:38.443 18:56:07 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:28:38.701 [2024-10-08 18:56:07.220400] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:28:38.701 [2024-10-08 18:56:07.220518] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73082 ] 00:28:38.701 [2024-10-08 18:56:07.384264] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:38.960 [2024-10-08 18:56:07.601923] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:38.960 [2024-10-08 18:56:07.601992] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:39.527 Running I/O for 5 seconds... 
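The big-I/O pass that follows is the same bdevperf verify run with only the block size raised from 4 KiB to 64 KiB (-o 65536), which is why total IOPS fall from roughly 22k in the table above to under 2k in the one below while aggregate throughput still rises (85.85 to 105.36 MiB/s in the Total rows):

    # Identical to the previous verify run except for -o.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3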
00:28:45.359 1471.00 IOPS, 91.94 MiB/s [2024-10-08T18:56:14.373Z] 3501.50 IOPS, 218.84 MiB/s [2024-10-08T18:56:14.373Z] 3228.67 IOPS, 201.79 MiB/s 00:28:45.616 Latency(us) 00:28:45.616 [2024-10-08T18:56:14.373Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:45.616 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:28:45.616 Verification LBA range: start 0x0 length 0xa000 00:28:45.616 nvme0n1 : 5.85 131.25 8.20 0.00 0.00 941578.73 236678.58 882801.13 00:28:45.616 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:28:45.616 Verification LBA range: start 0xa000 length 0xa000 00:28:45.616 nvme0n1 : 5.86 124.16 7.76 0.00 0.00 1013359.92 21470.84 1893428.66 00:28:45.616 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:28:45.616 Verification LBA range: start 0x0 length 0xbd0b 00:28:45.616 nvme1n1 : 5.80 140.70 8.79 0.00 0.00 842753.49 19099.06 1757613.10 00:28:45.616 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:28:45.616 Verification LBA range: start 0xbd0b length 0xbd0b 00:28:45.616 nvme1n1 : 5.85 169.12 10.57 0.00 0.00 710883.84 11734.06 735001.84 00:28:45.616 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:28:45.616 Verification LBA range: start 0x0 length 0x8000 00:28:45.616 nvme2n1 : 5.86 161.19 10.07 0.00 0.00 723738.34 19848.05 906768.58 00:28:45.616 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:28:45.616 Verification LBA range: start 0x8000 length 0x8000 00:28:45.616 nvme2n1 : 5.87 119.90 7.49 0.00 0.00 992920.05 35951.18 1158426.82 00:28:45.616 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:28:45.616 Verification LBA range: start 0x0 length 0x8000 00:28:45.616 nvme2n2 : 5.88 107.54 6.72 0.00 0.00 1062999.06 114344.72 2428701.74 00:28:45.616 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:28:45.616 Verification LBA range: start 0x8000 length 0x8000 00:28:45.616 nvme2n2 : 5.87 128.02 8.00 0.00 0.00 905654.07 36949.82 1565873.49 00:28:45.616 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:28:45.616 Verification LBA range: start 0x0 length 0x8000 00:28:45.616 nvme2n3 : 5.87 145.91 9.12 0.00 0.00 768613.62 59918.63 1198372.57 00:28:45.616 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:28:45.616 Verification LBA range: start 0x8000 length 0x8000 00:28:45.616 nvme2n3 : 5.86 139.30 8.71 0.00 0.00 812043.63 15853.47 1637775.85 00:28:45.616 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:28:45.616 Verification LBA range: start 0x0 length 0x2000 00:28:45.616 nvme3n1 : 5.87 169.01 10.56 0.00 0.00 649573.88 5430.13 1262285.78 00:28:45.616 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:28:45.616 Verification LBA range: start 0x2000 length 0x2000 00:28:45.616 nvme3n1 : 5.88 149.74 9.36 0.00 0.00 734536.21 21346.01 1933374.42 00:28:45.616 [2024-10-08T18:56:14.373Z] =================================================================================================================== 00:28:45.616 [2024-10-08T18:56:14.373Z] Total : 1685.82 105.36 0.00 0.00 830000.23 5430.13 2428701.74 00:28:47.516 00:28:47.516 real 0m8.634s 00:28:47.516 user 0m15.402s 00:28:47.516 sys 0m0.656s 00:28:47.516 18:56:15 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:47.516 18:56:15 
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:28:47.516 ************************************ 00:28:47.516 END TEST bdev_verify_big_io 00:28:47.516 ************************************ 00:28:47.516 18:56:15 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:47.516 18:56:15 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:28:47.516 18:56:15 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:47.516 18:56:15 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:28:47.516 ************************************ 00:28:47.516 START TEST bdev_write_zeroes 00:28:47.516 ************************************ 00:28:47.516 18:56:15 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:47.516 [2024-10-08 18:56:15.942259] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:28:47.516 [2024-10-08 18:56:15.942430] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73194 ] 00:28:47.516 [2024-10-08 18:56:16.127985] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.775 [2024-10-08 18:56:16.354707] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:48.342 Running I/O for 1 seconds... 
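Note: the write_zeroes pass drives the same six bdevs with zero-fill commands for one second. A sketch of the invocation, flags copied from the run_test line above:

SPDK=/home/vagrant/spdk_repo/spdk
# Single-core run (the EAL line above shows -c 0x1), hence one job per bdev in
# the table that follows rather than the two-jobs-per-bdev layout of -m 0x3.
"$SPDK/build/examples/bdevperf" --json "$SPDK/test/bdev/bdev.json" \
  -q 128 -o 4096 -w write_zeroes -t 1

In the table below, nvme1n1 posts roughly 1.7x the IOPS of the other namespaces at under half the average latency; with identical queue depth and I/O size, that is consistent with this device completing Write Zeroes more cheaply than the others, though the log itself does not say why.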
00:28:49.276 63616.00 IOPS, 248.50 MiB/s 00:28:49.276 Latency(us) 00:28:49.276 [2024-10-08T18:56:18.033Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:49.276 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:28:49.276 nvme0n1 : 1.02 9374.84 36.62 0.00 0.00 13641.17 7177.75 27712.37 00:28:49.276 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:28:49.276 nvme1n1 : 1.03 15974.95 62.40 0.00 0.00 7998.65 4369.07 24841.26 00:28:49.277 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:28:49.277 nvme2n1 : 1.03 9346.69 36.51 0.00 0.00 13584.87 7146.54 25340.59 00:28:49.277 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:28:49.277 nvme2n2 : 1.03 9333.26 36.46 0.00 0.00 13593.70 7115.34 25715.08 00:28:49.277 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:28:49.277 nvme2n3 : 1.03 9319.75 36.41 0.00 0.00 13602.97 7115.34 26214.40 00:28:49.277 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:28:49.277 nvme3n1 : 1.03 9306.33 36.35 0.00 0.00 13613.77 7146.54 26588.89 00:28:49.277 [2024-10-08T18:56:18.034Z] =================================================================================================================== 00:28:49.277 [2024-10-08T18:56:18.034Z] Total : 62655.83 244.75 0.00 0.00 12180.05 4369.07 27712.37 00:28:50.650 00:28:50.650 real 0m3.427s 00:28:50.650 user 0m2.564s 00:28:50.650 sys 0m0.690s 00:28:50.650 18:56:19 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:50.650 18:56:19 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:28:50.650 ************************************ 00:28:50.650 END TEST bdev_write_zeroes 00:28:50.650 ************************************ 00:28:50.650 18:56:19 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:50.650 18:56:19 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:28:50.650 18:56:19 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:50.650 18:56:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:28:50.650 ************************************ 00:28:50.650 START TEST bdev_json_nonenclosed 00:28:50.650 ************************************ 00:28:50.650 18:56:19 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:50.908 [2024-10-08 18:56:19.433323] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
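Note: bdev_json_nonenclosed, which is starting here, and bdev_json_nonarray after it are negative tests: bdevperf is fed a deliberately malformed --json config and must shut down with an error instead of starting I/O. The repo files nonenclosed.json and nonarray.json are not shown in this log; the stand-ins below are hypothetical, inferred from the two error messages ("not enclosed in {}" and "'subsystems' should be an array"), and only illustrate the shapes being rejected:

SPDK=/home/vagrant/spdk_repo/spdk
# Hypothetical stand-ins for the malformed configs; the real repo files are not
# shown in this log and may differ in detail.
cat > /tmp/nonenclosed.json <<'EOF'
"subsystems": []
EOF
cat > /tmp/nonarray.json <<'EOF'
{ "subsystems": { "not": "an array" } }
EOF
# Either file must make bdevperf exit non-zero ("spdk_app_stop'd on non-zero"):
"$SPDK/build/examples/bdevperf" --json /tmp/nonenclosed.json \
  -q 128 -o 4096 -w write_zeroes -t 1 && echo "unexpected success" >&2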
00:28:50.908 [2024-10-08 18:56:19.433481] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73255 ] 00:28:50.908 [2024-10-08 18:56:19.616539] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:51.167 [2024-10-08 18:56:19.839739] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:51.167 [2024-10-08 18:56:19.839845] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:28:51.167 [2024-10-08 18:56:19.839870] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:28:51.167 [2024-10-08 18:56:19.839883] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:51.733 00:28:51.733 real 0m0.956s 00:28:51.733 user 0m0.692s 00:28:51.733 sys 0m0.157s 00:28:51.733 18:56:20 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:51.733 18:56:20 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:28:51.733 ************************************ 00:28:51.733 END TEST bdev_json_nonenclosed 00:28:51.733 ************************************ 00:28:51.733 18:56:20 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:51.733 18:56:20 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:28:51.733 18:56:20 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:51.733 18:56:20 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:28:51.733 ************************************ 00:28:51.733 START TEST bdev_json_nonarray 00:28:51.733 ************************************ 00:28:51.733 18:56:20 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:51.733 [2024-10-08 18:56:20.456052] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:28:51.733 [2024-10-08 18:56:20.456216] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73286 ] 00:28:51.991 [2024-10-08 18:56:20.643318] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:52.248 [2024-10-08 18:56:20.858395] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:52.248 [2024-10-08 18:56:20.858484] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:28:52.248 [2024-10-08 18:56:20.858506] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:28:52.248 [2024-10-08 18:56:20.858518] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:52.814 00:28:52.814 real 0m0.958s 00:28:52.814 user 0m0.677s 00:28:52.814 sys 0m0.173s 00:28:52.814 18:56:21 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:52.814 18:56:21 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:28:52.814 ************************************ 00:28:52.814 END TEST bdev_json_nonarray 00:28:52.814 ************************************ 00:28:52.814 18:56:21 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:28:52.814 18:56:21 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:28:52.814 18:56:21 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:28:52.814 18:56:21 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:28:52.814 18:56:21 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:28:52.814 18:56:21 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:28:52.814 18:56:21 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:28:52.814 18:56:21 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:28:52.814 18:56:21 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:28:52.814 18:56:21 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:28:52.814 18:56:21 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:28:52.814 18:56:21 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:53.380 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:53.945 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:28:53.946 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:28:54.204 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:28:54.204 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:28:54.204 00:28:54.204 real 1m5.546s 00:28:54.204 user 1m44.547s 00:28:54.204 sys 0m31.282s 00:28:54.204 18:56:22 blockdev_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:54.204 ************************************ 00:28:54.204 END TEST blockdev_xnvme 00:28:54.204 18:56:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:28:54.204 ************************************ 00:28:54.463 18:56:22 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:28:54.463 18:56:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:54.463 18:56:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:54.463 18:56:22 -- common/autotest_common.sh@10 -- # set +x 00:28:54.463 ************************************ 00:28:54.463 START TEST ublk 00:28:54.463 ************************************ 00:28:54.463 18:56:22 ublk -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:28:54.463 * Looking for test storage... 
00:28:54.463 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:28:54.463 18:56:23 ublk -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:54.463 18:56:23 ublk -- common/autotest_common.sh@1681 -- # lcov --version 00:28:54.463 18:56:23 ublk -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:54.463 18:56:23 ublk -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:54.463 18:56:23 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:54.463 18:56:23 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:54.463 18:56:23 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:54.463 18:56:23 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:28:54.463 18:56:23 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:28:54.463 18:56:23 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:28:54.463 18:56:23 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:28:54.463 18:56:23 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:28:54.463 18:56:23 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:28:54.463 18:56:23 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:28:54.463 18:56:23 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:54.463 18:56:23 ublk -- scripts/common.sh@344 -- # case "$op" in 00:28:54.463 18:56:23 ublk -- scripts/common.sh@345 -- # : 1 00:28:54.463 18:56:23 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:54.463 18:56:23 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:54.463 18:56:23 ublk -- scripts/common.sh@365 -- # decimal 1 00:28:54.463 18:56:23 ublk -- scripts/common.sh@353 -- # local d=1 00:28:54.463 18:56:23 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:54.463 18:56:23 ublk -- scripts/common.sh@355 -- # echo 1 00:28:54.463 18:56:23 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:28:54.463 18:56:23 ublk -- scripts/common.sh@366 -- # decimal 2 00:28:54.722 18:56:23 ublk -- scripts/common.sh@353 -- # local d=2 00:28:54.722 18:56:23 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:54.722 18:56:23 ublk -- scripts/common.sh@355 -- # echo 2 00:28:54.722 18:56:23 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:28:54.722 18:56:23 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:54.722 18:56:23 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:54.722 18:56:23 ublk -- scripts/common.sh@368 -- # return 0 00:28:54.722 18:56:23 ublk -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:54.722 18:56:23 ublk -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:54.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:54.722 --rc genhtml_branch_coverage=1 00:28:54.722 --rc genhtml_function_coverage=1 00:28:54.722 --rc genhtml_legend=1 00:28:54.722 --rc geninfo_all_blocks=1 00:28:54.722 --rc geninfo_unexecuted_blocks=1 00:28:54.722 00:28:54.722 ' 00:28:54.722 18:56:23 ublk -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:54.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:54.722 --rc genhtml_branch_coverage=1 00:28:54.722 --rc genhtml_function_coverage=1 00:28:54.722 --rc genhtml_legend=1 00:28:54.722 --rc geninfo_all_blocks=1 00:28:54.722 --rc geninfo_unexecuted_blocks=1 00:28:54.722 00:28:54.722 ' 00:28:54.722 18:56:23 ublk -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:54.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:54.722 --rc genhtml_branch_coverage=1 00:28:54.722 --rc 
genhtml_function_coverage=1 00:28:54.722 --rc genhtml_legend=1 00:28:54.722 --rc geninfo_all_blocks=1 00:28:54.722 --rc geninfo_unexecuted_blocks=1 00:28:54.722 00:28:54.722 ' 00:28:54.722 18:56:23 ublk -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:54.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:54.722 --rc genhtml_branch_coverage=1 00:28:54.722 --rc genhtml_function_coverage=1 00:28:54.722 --rc genhtml_legend=1 00:28:54.722 --rc geninfo_all_blocks=1 00:28:54.722 --rc geninfo_unexecuted_blocks=1 00:28:54.722 00:28:54.722 ' 00:28:54.722 18:56:23 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:28:54.722 18:56:23 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:28:54.722 18:56:23 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:28:54.722 18:56:23 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:28:54.722 18:56:23 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:28:54.722 18:56:23 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:28:54.722 18:56:23 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:28:54.723 18:56:23 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:28:54.723 18:56:23 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:28:54.723 18:56:23 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:28:54.723 18:56:23 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:28:54.723 18:56:23 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:28:54.723 18:56:23 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:28:54.723 18:56:23 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:28:54.723 18:56:23 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:28:54.723 18:56:23 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:28:54.723 18:56:23 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:28:54.723 18:56:23 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:28:54.723 18:56:23 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:28:54.723 18:56:23 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:28:54.723 18:56:23 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:54.723 18:56:23 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:54.723 18:56:23 ublk -- common/autotest_common.sh@10 -- # set +x 00:28:54.723 ************************************ 00:28:54.723 START TEST test_save_ublk_config 00:28:54.723 ************************************ 00:28:54.723 18:56:23 ublk.test_save_ublk_config -- common/autotest_common.sh@1125 -- # test_save_config 00:28:54.723 18:56:23 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:28:54.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
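Note: test_save_config launches a bare spdk_tgt with ublk logging enabled and blocks until its JSON-RPC socket answers. The real waitforlisten helper lives in common/autotest_common.sh; a minimal equivalent loop, assuming the default /var/tmp/spdk.sock socket seen in the trace:

SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/bin/spdk_tgt" -L ublk &   # -L ublk enables the ublk debug traces seen below
tgtpid=$!
# Poll the default RPC socket until the target answers; rpc_get_methods is a
# cheap query that succeeds as soon as the server is listening.
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.1
done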
00:28:54.723 18:56:23 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=73583 00:28:54.723 18:56:23 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:28:54.723 18:56:23 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:28:54.723 18:56:23 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 73583 00:28:54.723 18:56:23 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 73583 ']' 00:28:54.723 18:56:23 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:54.723 18:56:23 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:54.723 18:56:23 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:54.723 18:56:23 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:54.723 18:56:23 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:28:54.723 [2024-10-08 18:56:23.397414] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:28:54.723 [2024-10-08 18:56:23.397598] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73583 ] 00:28:54.981 [2024-10-08 18:56:23.584797] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:55.239 [2024-10-08 18:56:23.855589] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:56.254 18:56:24 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:56.254 18:56:24 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:28:56.254 18:56:24 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:28:56.254 18:56:24 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:28:56.254 18:56:24 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.254 18:56:24 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:28:56.254 [2024-10-08 18:56:24.744979] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:28:56.254 [2024-10-08 18:56:24.746108] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:28:56.254 malloc0 00:28:56.254 [2024-10-08 18:56:24.832156] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:28:56.254 [2024-10-08 18:56:24.832283] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:28:56.254 [2024-10-08 18:56:24.832297] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:28:56.254 [2024-10-08 18:56:24.832310] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:28:56.254 [2024-10-08 18:56:24.840019] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:28:56.254 [2024-10-08 18:56:24.840044] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:28:56.254 [2024-10-08 18:56:24.847991] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:28:56.254 [2024-10-08 18:56:24.848097] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd 
UBLK_CMD_START_DEV 00:28:56.254 [2024-10-08 18:56:24.865006] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:28:56.254 0 00:28:56.254 18:56:24 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.254 18:56:24 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:28:56.254 18:56:24 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.254 18:56:24 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:28:56.513 18:56:25 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.513 18:56:25 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:28:56.513 "subsystems": [ 00:28:56.513 { 00:28:56.513 "subsystem": "fsdev", 00:28:56.513 "config": [ 00:28:56.513 { 00:28:56.513 "method": "fsdev_set_opts", 00:28:56.513 "params": { 00:28:56.513 "fsdev_io_pool_size": 65535, 00:28:56.513 "fsdev_io_cache_size": 256 00:28:56.513 } 00:28:56.513 } 00:28:56.513 ] 00:28:56.513 }, 00:28:56.513 { 00:28:56.513 "subsystem": "keyring", 00:28:56.513 "config": [] 00:28:56.513 }, 00:28:56.513 { 00:28:56.513 "subsystem": "iobuf", 00:28:56.513 "config": [ 00:28:56.513 { 00:28:56.513 "method": "iobuf_set_options", 00:28:56.513 "params": { 00:28:56.513 "small_pool_count": 8192, 00:28:56.513 "large_pool_count": 1024, 00:28:56.513 "small_bufsize": 8192, 00:28:56.513 "large_bufsize": 135168 00:28:56.513 } 00:28:56.513 } 00:28:56.513 ] 00:28:56.513 }, 00:28:56.513 { 00:28:56.513 "subsystem": "sock", 00:28:56.513 "config": [ 00:28:56.513 { 00:28:56.513 "method": "sock_set_default_impl", 00:28:56.513 "params": { 00:28:56.513 "impl_name": "posix" 00:28:56.513 } 00:28:56.513 }, 00:28:56.513 { 00:28:56.513 "method": "sock_impl_set_options", 00:28:56.513 "params": { 00:28:56.513 "impl_name": "ssl", 00:28:56.513 "recv_buf_size": 4096, 00:28:56.513 "send_buf_size": 4096, 00:28:56.513 "enable_recv_pipe": true, 00:28:56.513 "enable_quickack": false, 00:28:56.513 "enable_placement_id": 0, 00:28:56.513 "enable_zerocopy_send_server": true, 00:28:56.513 "enable_zerocopy_send_client": false, 00:28:56.513 "zerocopy_threshold": 0, 00:28:56.513 "tls_version": 0, 00:28:56.513 "enable_ktls": false 00:28:56.513 } 00:28:56.513 }, 00:28:56.513 { 00:28:56.513 "method": "sock_impl_set_options", 00:28:56.513 "params": { 00:28:56.513 "impl_name": "posix", 00:28:56.513 "recv_buf_size": 2097152, 00:28:56.513 "send_buf_size": 2097152, 00:28:56.513 "enable_recv_pipe": true, 00:28:56.513 "enable_quickack": false, 00:28:56.513 "enable_placement_id": 0, 00:28:56.513 "enable_zerocopy_send_server": true, 00:28:56.513 "enable_zerocopy_send_client": false, 00:28:56.513 "zerocopy_threshold": 0, 00:28:56.513 "tls_version": 0, 00:28:56.513 "enable_ktls": false 00:28:56.513 } 00:28:56.513 } 00:28:56.513 ] 00:28:56.513 }, 00:28:56.513 { 00:28:56.513 "subsystem": "vmd", 00:28:56.513 "config": [] 00:28:56.513 }, 00:28:56.513 { 00:28:56.513 "subsystem": "accel", 00:28:56.513 "config": [ 00:28:56.513 { 00:28:56.513 "method": "accel_set_options", 00:28:56.513 "params": { 00:28:56.513 "small_cache_size": 128, 00:28:56.513 "large_cache_size": 16, 00:28:56.513 "task_count": 2048, 00:28:56.513 "sequence_count": 2048, 00:28:56.513 "buf_count": 2048 00:28:56.513 } 00:28:56.513 } 00:28:56.513 ] 00:28:56.513 }, 00:28:56.513 { 00:28:56.513 "subsystem": "bdev", 00:28:56.513 "config": [ 00:28:56.513 { 00:28:56.513 "method": "bdev_set_options", 00:28:56.513 "params": { 00:28:56.513 
"bdev_io_pool_size": 65535, 00:28:56.513 "bdev_io_cache_size": 256, 00:28:56.513 "bdev_auto_examine": true, 00:28:56.513 "iobuf_small_cache_size": 128, 00:28:56.513 "iobuf_large_cache_size": 16 00:28:56.513 } 00:28:56.513 }, 00:28:56.513 { 00:28:56.513 "method": "bdev_raid_set_options", 00:28:56.513 "params": { 00:28:56.513 "process_window_size_kb": 1024, 00:28:56.513 "process_max_bandwidth_mb_sec": 0 00:28:56.513 } 00:28:56.513 }, 00:28:56.513 { 00:28:56.513 "method": "bdev_iscsi_set_options", 00:28:56.513 "params": { 00:28:56.513 "timeout_sec": 30 00:28:56.513 } 00:28:56.513 }, 00:28:56.513 { 00:28:56.513 "method": "bdev_nvme_set_options", 00:28:56.513 "params": { 00:28:56.513 "action_on_timeout": "none", 00:28:56.513 "timeout_us": 0, 00:28:56.513 "timeout_admin_us": 0, 00:28:56.513 "keep_alive_timeout_ms": 10000, 00:28:56.513 "arbitration_burst": 0, 00:28:56.513 "low_priority_weight": 0, 00:28:56.513 "medium_priority_weight": 0, 00:28:56.513 "high_priority_weight": 0, 00:28:56.513 "nvme_adminq_poll_period_us": 10000, 00:28:56.513 "nvme_ioq_poll_period_us": 0, 00:28:56.513 "io_queue_requests": 0, 00:28:56.513 "delay_cmd_submit": true, 00:28:56.513 "transport_retry_count": 4, 00:28:56.513 "bdev_retry_count": 3, 00:28:56.513 "transport_ack_timeout": 0, 00:28:56.513 "ctrlr_loss_timeout_sec": 0, 00:28:56.513 "reconnect_delay_sec": 0, 00:28:56.513 "fast_io_fail_timeout_sec": 0, 00:28:56.513 "disable_auto_failback": false, 00:28:56.513 "generate_uuids": false, 00:28:56.513 "transport_tos": 0, 00:28:56.513 "nvme_error_stat": false, 00:28:56.513 "rdma_srq_size": 0, 00:28:56.513 "io_path_stat": false, 00:28:56.513 "allow_accel_sequence": false, 00:28:56.513 "rdma_max_cq_size": 0, 00:28:56.513 "rdma_cm_event_timeout_ms": 0, 00:28:56.513 "dhchap_digests": [ 00:28:56.513 "sha256", 00:28:56.513 "sha384", 00:28:56.513 "sha512" 00:28:56.513 ], 00:28:56.513 "dhchap_dhgroups": [ 00:28:56.513 "null", 00:28:56.513 "ffdhe2048", 00:28:56.513 "ffdhe3072", 00:28:56.513 "ffdhe4096", 00:28:56.513 "ffdhe6144", 00:28:56.513 "ffdhe8192" 00:28:56.513 ] 00:28:56.513 } 00:28:56.513 }, 00:28:56.513 { 00:28:56.513 "method": "bdev_nvme_set_hotplug", 00:28:56.513 "params": { 00:28:56.513 "period_us": 100000, 00:28:56.513 "enable": false 00:28:56.513 } 00:28:56.513 }, 00:28:56.513 { 00:28:56.513 "method": "bdev_malloc_create", 00:28:56.513 "params": { 00:28:56.513 "name": "malloc0", 00:28:56.513 "num_blocks": 8192, 00:28:56.513 "block_size": 4096, 00:28:56.513 "physical_block_size": 4096, 00:28:56.513 "uuid": "d365f187-23a1-4800-95eb-2e1d8013a216", 00:28:56.513 "optimal_io_boundary": 0, 00:28:56.513 "md_size": 0, 00:28:56.513 "dif_type": 0, 00:28:56.513 "dif_is_head_of_md": false, 00:28:56.513 "dif_pi_format": 0 00:28:56.513 } 00:28:56.513 }, 00:28:56.513 { 00:28:56.513 "method": "bdev_wait_for_examine" 00:28:56.513 } 00:28:56.513 ] 00:28:56.513 }, 00:28:56.513 { 00:28:56.513 "subsystem": "scsi", 00:28:56.513 "config": null 00:28:56.513 }, 00:28:56.513 { 00:28:56.513 "subsystem": "scheduler", 00:28:56.513 "config": [ 00:28:56.513 { 00:28:56.513 "method": "framework_set_scheduler", 00:28:56.513 "params": { 00:28:56.513 "name": "static" 00:28:56.513 } 00:28:56.513 } 00:28:56.513 ] 00:28:56.513 }, 00:28:56.513 { 00:28:56.513 "subsystem": "vhost_scsi", 00:28:56.513 "config": [] 00:28:56.513 }, 00:28:56.513 { 00:28:56.513 "subsystem": "vhost_blk", 00:28:56.514 "config": [] 00:28:56.514 }, 00:28:56.514 { 00:28:56.514 "subsystem": "ublk", 00:28:56.514 "config": [ 00:28:56.514 { 00:28:56.514 "method": "ublk_create_target", 
00:28:56.514 "params": { 00:28:56.514 "cpumask": "1" 00:28:56.514 } 00:28:56.514 }, 00:28:56.514 { 00:28:56.514 "method": "ublk_start_disk", 00:28:56.514 "params": { 00:28:56.514 "bdev_name": "malloc0", 00:28:56.514 "ublk_id": 0, 00:28:56.514 "num_queues": 1, 00:28:56.514 "queue_depth": 128 00:28:56.514 } 00:28:56.514 } 00:28:56.514 ] 00:28:56.514 }, 00:28:56.514 { 00:28:56.514 "subsystem": "nbd", 00:28:56.514 "config": [] 00:28:56.514 }, 00:28:56.514 { 00:28:56.514 "subsystem": "nvmf", 00:28:56.514 "config": [ 00:28:56.514 { 00:28:56.514 "method": "nvmf_set_config", 00:28:56.514 "params": { 00:28:56.514 "discovery_filter": "match_any", 00:28:56.514 "admin_cmd_passthru": { 00:28:56.514 "identify_ctrlr": false 00:28:56.514 }, 00:28:56.514 "dhchap_digests": [ 00:28:56.514 "sha256", 00:28:56.514 "sha384", 00:28:56.514 "sha512" 00:28:56.514 ], 00:28:56.514 "dhchap_dhgroups": [ 00:28:56.514 "null", 00:28:56.514 "ffdhe2048", 00:28:56.514 "ffdhe3072", 00:28:56.514 "ffdhe4096", 00:28:56.514 "ffdhe6144", 00:28:56.514 "ffdhe8192" 00:28:56.514 ] 00:28:56.514 } 00:28:56.514 }, 00:28:56.514 { 00:28:56.514 "method": "nvmf_set_max_subsystems", 00:28:56.514 "params": { 00:28:56.514 "max_subsystems": 1024 00:28:56.514 } 00:28:56.514 }, 00:28:56.514 { 00:28:56.514 "method": "nvmf_set_crdt", 00:28:56.514 "params": { 00:28:56.514 "crdt1": 0, 00:28:56.514 "crdt2": 0, 00:28:56.514 "crdt3": 0 00:28:56.514 } 00:28:56.514 } 00:28:56.514 ] 00:28:56.514 }, 00:28:56.514 { 00:28:56.514 "subsystem": "iscsi", 00:28:56.514 "config": [ 00:28:56.514 { 00:28:56.514 "method": "iscsi_set_options", 00:28:56.514 "params": { 00:28:56.514 "node_base": "iqn.2016-06.io.spdk", 00:28:56.514 "max_sessions": 128, 00:28:56.514 "max_connections_per_session": 2, 00:28:56.514 "max_queue_depth": 64, 00:28:56.514 "default_time2wait": 2, 00:28:56.514 "default_time2retain": 20, 00:28:56.514 "first_burst_length": 8192, 00:28:56.514 "immediate_data": true, 00:28:56.514 "allow_duplicated_isid": false, 00:28:56.514 "error_recovery_level": 0, 00:28:56.514 "nop_timeout": 60, 00:28:56.514 "nop_in_interval": 30, 00:28:56.514 "disable_chap": false, 00:28:56.514 "require_chap": false, 00:28:56.514 "mutual_chap": false, 00:28:56.514 "chap_group": 0, 00:28:56.514 "max_large_datain_per_connection": 64, 00:28:56.514 "max_r2t_per_connection": 4, 00:28:56.514 "pdu_pool_size": 36864, 00:28:56.514 "immediate_data_pool_size": 16384, 00:28:56.514 "data_out_pool_size": 2048 00:28:56.514 } 00:28:56.514 } 00:28:56.514 ] 00:28:56.514 } 00:28:56.514 ] 00:28:56.514 }' 00:28:56.514 18:56:25 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 73583 00:28:56.514 18:56:25 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 73583 ']' 00:28:56.514 18:56:25 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 73583 00:28:56.514 18:56:25 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:28:56.514 18:56:25 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:56.514 18:56:25 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73583 00:28:56.514 18:56:25 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:56.514 18:56:25 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:56.514 killing process with pid 73583 00:28:56.514 18:56:25 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73583' 00:28:56.514 
18:56:25 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 73583 00:28:56.514 18:56:25 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 73583 00:28:58.416 [2024-10-08 18:56:26.897334] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:28:58.416 [2024-10-08 18:56:26.932007] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:28:58.416 [2024-10-08 18:56:26.932148] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:28:58.416 [2024-10-08 18:56:26.940995] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:28:58.416 [2024-10-08 18:56:26.941059] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:28:58.416 [2024-10-08 18:56:26.941073] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:28:58.416 [2024-10-08 18:56:26.941098] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:28:58.416 [2024-10-08 18:56:26.941243] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:29:00.315 18:56:29 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=73661 00:29:00.315 18:56:29 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 73661 00:29:00.315 18:56:29 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:29:00.315 18:56:29 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:29:00.315 "subsystems": [ 00:29:00.315 { 00:29:00.315 "subsystem": "fsdev", 00:29:00.315 "config": [ 00:29:00.315 { 00:29:00.315 "method": "fsdev_set_opts", 00:29:00.315 "params": { 00:29:00.315 "fsdev_io_pool_size": 65535, 00:29:00.315 "fsdev_io_cache_size": 256 00:29:00.315 } 00:29:00.315 } 00:29:00.315 ] 00:29:00.315 }, 00:29:00.315 { 00:29:00.315 "subsystem": "keyring", 00:29:00.315 "config": [] 00:29:00.315 }, 00:29:00.315 { 00:29:00.315 "subsystem": "iobuf", 00:29:00.315 "config": [ 00:29:00.315 { 00:29:00.315 "method": "iobuf_set_options", 00:29:00.315 "params": { 00:29:00.315 "small_pool_count": 8192, 00:29:00.315 "large_pool_count": 1024, 00:29:00.315 "small_bufsize": 8192, 00:29:00.315 "large_bufsize": 135168 00:29:00.315 } 00:29:00.315 } 00:29:00.315 ] 00:29:00.315 }, 00:29:00.315 { 00:29:00.315 "subsystem": "sock", 00:29:00.315 "config": [ 00:29:00.315 { 00:29:00.315 "method": "sock_set_default_impl", 00:29:00.315 "params": { 00:29:00.315 "impl_name": "posix" 00:29:00.315 } 00:29:00.315 }, 00:29:00.315 { 00:29:00.315 "method": "sock_impl_set_options", 00:29:00.315 "params": { 00:29:00.315 "impl_name": "ssl", 00:29:00.315 "recv_buf_size": 4096, 00:29:00.315 "send_buf_size": 4096, 00:29:00.315 "enable_recv_pipe": true, 00:29:00.315 "enable_quickack": false, 00:29:00.315 "enable_placement_id": 0, 00:29:00.315 "enable_zerocopy_send_server": true, 00:29:00.315 "enable_zerocopy_send_client": false, 00:29:00.315 "zerocopy_threshold": 0, 00:29:00.315 "tls_version": 0, 00:29:00.315 "enable_ktls": false 00:29:00.315 } 00:29:00.315 }, 00:29:00.315 { 00:29:00.315 "method": "sock_impl_set_options", 00:29:00.315 "params": { 00:29:00.315 "impl_name": "posix", 00:29:00.315 "recv_buf_size": 2097152, 00:29:00.315 "send_buf_size": 2097152, 00:29:00.315 "enable_recv_pipe": true, 00:29:00.315 "enable_quickack": false, 00:29:00.315 "enable_placement_id": 0, 00:29:00.315 "enable_zerocopy_send_server": true, 00:29:00.315 "enable_zerocopy_send_client": false, 00:29:00.315 "zerocopy_threshold": 0, 00:29:00.315 "tls_version": 0, 00:29:00.315 "enable_ktls": 
false 00:29:00.315 } 00:29:00.315 } 00:29:00.315 ] 00:29:00.315 }, 00:29:00.315 { 00:29:00.315 "subsystem": "vmd", 00:29:00.315 "config": [] 00:29:00.315 }, 00:29:00.315 { 00:29:00.315 "subsystem": "accel", 00:29:00.315 "config": [ 00:29:00.315 { 00:29:00.315 "method": "accel_set_options", 00:29:00.315 "params": { 00:29:00.315 "small_cache_size": 128, 00:29:00.315 "large_cache_size": 16, 00:29:00.316 "task_count": 2048, 00:29:00.316 "sequence_count": 2048, 00:29:00.316 "buf_count": 2048 00:29:00.316 } 00:29:00.316 } 00:29:00.316 ] 00:29:00.316 }, 00:29:00.316 { 00:29:00.316 "subsystem": "bdev", 00:29:00.316 "config": [ 00:29:00.316 { 00:29:00.316 "method": "bdev_set_options", 00:29:00.316 "params": { 00:29:00.316 "bdev_io_pool_size": 65535, 00:29:00.316 "bdev_io_cache_size": 256, 00:29:00.316 "bdev_auto_examine": true, 00:29:00.316 "iobuf_small_cache_size": 128, 00:29:00.316 "iobuf_large_cache_size": 16 00:29:00.316 } 00:29:00.316 }, 00:29:00.316 { 00:29:00.316 "method": "bdev_raid_set_options", 00:29:00.316 "params": { 00:29:00.316 "process_window_size_kb": 1024, 00:29:00.316 "process_max_bandwidth_mb_sec": 0 00:29:00.316 } 00:29:00.316 }, 00:29:00.316 { 00:29:00.316 "method": "bdev_iscsi_set_options", 00:29:00.316 "params": { 00:29:00.316 "timeout_sec": 30 00:29:00.316 } 00:29:00.316 }, 00:29:00.316 { 00:29:00.316 "method": "bdev_nvme_set_options", 00:29:00.316 "params": { 00:29:00.316 "action_on_timeout": "none", 00:29:00.316 "timeout_us": 0, 00:29:00.316 "timeout_admin_us": 0, 00:29:00.316 "keep_alive_timeout_ms": 10000, 00:29:00.316 "arbitration_burst": 0, 00:29:00.316 "low_priority_weight": 0, 00:29:00.316 "medium_priority_weight": 0, 00:29:00.316 "high_priority_weight": 0, 00:29:00.316 "nvme_adminq_poll_period_us": 10000, 00:29:00.316 "nvme_ioq_poll_period_us": 0, 00:29:00.316 "io_queue_requests": 0, 00:29:00.316 "delay_cmd_submit": true, 00:29:00.316 "transport_retry_count": 4, 00:29:00.316 "bdev_retry_count": 3, 00:29:00.316 "transport_ack_timeout": 0, 00:29:00.316 "ctrlr_loss_timeout_sec": 0, 00:29:00.316 "reconnect_delay_sec": 0, 00:29:00.316 "fast_io_fail_timeout_sec": 0, 00:29:00.316 "disable_auto_failback": false, 00:29:00.316 "generate_uuids": false, 00:29:00.316 "transport_tos": 0, 00:29:00.316 "nvme_error_stat": false, 00:29:00.316 "rdma_srq_size": 0, 00:29:00.316 "io_path_stat": false, 00:29:00.316 "allow_accel_sequence": false, 00:29:00.316 "rdma_max_cq_size": 0, 00:29:00.316 "rdma_cm_event_timeout_ms": 0, 00:29:00.316 "dhchap_digests": [ 00:29:00.316 "sha256", 00:29:00.316 "sha384", 00:29:00.316 "sha512" 00:29:00.316 ], 00:29:00.316 "dhchap_dhgroups": [ 00:29:00.316 "null", 00:29:00.316 "ffdhe2048", 00:29:00.316 "ffdhe3072", 00:29:00.316 "ffdhe4096", 00:29:00.316 "ffdhe6144", 00:29:00.316 "ffdhe8192" 00:29:00.316 ] 00:29:00.316 } 00:29:00.316 }, 00:29:00.316 { 00:29:00.316 "method": "bdev_nvme_set_hotplug", 00:29:00.316 "params": { 00:29:00.316 "period_us": 100000, 00:29:00.316 "enable": false 00:29:00.316 } 00:29:00.316 }, 00:29:00.316 { 00:29:00.316 "method": "bdev_malloc_create", 00:29:00.316 "params": { 00:29:00.316 "name": "malloc0", 00:29:00.316 "num_blocks": 8192, 00:29:00.316 "block_size": 4096, 00:29:00.316 "physical_block_size": 4096, 00:29:00.316 "uuid": "d365f187-23a1-4800-95eb-2e1d8013a216", 00:29:00.316 "optimal_io_boundary": 0, 00:29:00.316 "md_size": 0, 00:29:00.316 "dif_type": 0, 00:29:00.316 "dif_is_head_of_md": false, 00:29:00.316 "dif_pi_format": 0 00:29:00.316 } 00:29:00.316 }, 00:29:00.316 { 00:29:00.316 "method": "bdev_wait_for_examine" 
00:29:00.316 } 00:29:00.316 ] 00:29:00.316 }, 00:29:00.316 { 00:29:00.316 "subsystem": "scsi", 00:29:00.316 "config": null 00:29:00.316 }, 00:29:00.316 { 00:29:00.316 "subsystem": "scheduler", 00:29:00.316 "config": [ 00:29:00.316 { 00:29:00.316 "method": "framework_set_scheduler", 00:29:00.316 "params": { 00:29:00.316 "name": "static" 00:29:00.316 } 00:29:00.316 } 00:29:00.316 ] 00:29:00.316 }, 00:29:00.316 { 00:29:00.316 "subsystem": "vhost_scsi", 00:29:00.316 "config": [] 00:29:00.316 }, 00:29:00.316 { 00:29:00.316 "subsystem": "vhost_blk", 00:29:00.316 "config": [] 00:29:00.316 }, 00:29:00.316 { 00:29:00.316 "subsystem": "ublk", 00:29:00.316 "config": [ 00:29:00.316 { 00:29:00.316 "method": "ublk_create_target", 00:29:00.316 "params": { 00:29:00.316 "cpumask": "1" 00:29:00.316 } 00:29:00.316 }, 00:29:00.316 { 00:29:00.316 "method": "ublk_start_disk", 00:29:00.316 "params": { 00:29:00.316 "bdev_name": "malloc0", 00:29:00.316 "ublk_id": 0, 00:29:00.316 "num_queues": 1, 00:29:00.316 "queue_depth": 128 00:29:00.316 } 00:29:00.316 } 00:29:00.316 ] 00:29:00.316 }, 00:29:00.316 { 00:29:00.316 "subsystem": "nbd", 00:29:00.316 "config": [] 00:29:00.316 }, 00:29:00.316 { 00:29:00.316 "subsystem": "nvmf", 00:29:00.316 "config": [ 00:29:00.316 { 00:29:00.316 "method": "nvmf_set_config", 00:29:00.316 "params": { 00:29:00.316 "discovery_filter": "match_any", 00:29:00.316 "admin_cmd_passthru": { 00:29:00.316 "identify_ctrlr": false 00:29:00.316 }, 00:29:00.316 "dhchap_digests": [ 00:29:00.316 "sha256", 00:29:00.316 "sha384", 00:29:00.316 "sha512" 00:29:00.316 ], 00:29:00.316 "dhchap_dhgroups": [ 00:29:00.316 "null", 00:29:00.316 "ffdhe2048", 00:29:00.316 "ffdhe3072", 00:29:00.316 "ffdhe4096", 00:29:00.316 "ffdhe6144", 00:29:00.316 "ffdhe8192" 00:29:00.316 ] 00:29:00.316 } 00:29:00.316 }, 00:29:00.316 { 00:29:00.316 "method": "nvmf_set_max_subsystems", 00:29:00.316 "params": { 00:29:00.316 "max_subsystems": 1024 00:29:00.316 } 00:29:00.316 }, 00:29:00.316 { 00:29:00.316 "method": "nvmf_set_crdt", 00:29:00.316 "params": { 00:29:00.316 "crdt1": 0, 00:29:00.316 "crdt2": 0, 00:29:00.316 "crdt3": 0 00:29:00.316 } 00:29:00.316 } 00:29:00.316 ] 00:29:00.316 }, 00:29:00.316 { 00:29:00.316 "subsystem": "iscsi", 00:29:00.316 "config": [ 00:29:00.316 { 00:29:00.316 "method": "iscsi_set_options", 00:29:00.316 "params": { 00:29:00.316 "node_base": "iqn.2016-06.io.spdk", 00:29:00.316 "max_sessions": 128, 00:29:00.316 "max_connections_per_session": 2, 00:29:00.316 "max_queue_depth": 64, 00:29:00.316 "default_time2wait": 2, 00:29:00.316 "default_time2retain": 20, 00:29:00.316 "first_burst_length": 8192, 00:29:00.316 "immediate_data": true, 00:29:00.316 "allow_duplicated_isid": false, 00:29:00.316 "error_recovery_level": 0, 00:29:00.316 "nop_timeout": 60, 00:29:00.316 "nop_in_interval": 30, 00:29:00.316 "disable_chap": false, 00:29:00.316 "require_chap": false, 00:29:00.316 "mutual_chap": false, 00:29:00.316 "chap_group": 0, 00:29:00.316 "max_large_datain_per_connection": 64, 00:29:00.316 "max_r2t_per_connection": 4, 00:29:00.316 "pdu_pool_size": 36864, 00:29:00.316 "immediate_data_pool_size": 16384, 00:29:00.316 "data_out_pool_size": 2048 00:29:00.316 } 00:29:00.316 } 00:29:00.316 ] 00:29:00.316 } 00:29:00.316 ] 00:29:00.316 }' 00:29:00.316 18:56:29 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 73661 ']' 00:29:00.316 18:56:29 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:00.316 18:56:29 ublk.test_save_ublk_config -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:29:00.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:00.316 18:56:29 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:00.316 18:56:29 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:00.316 18:56:29 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:29:00.574 [2024-10-08 18:56:29.142496] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:29:00.574 [2024-10-08 18:56:29.143218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73661 ] 00:29:00.574 [2024-10-08 18:56:29.311651] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:00.832 [2024-10-08 18:56:29.545119] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:29:02.206 [2024-10-08 18:56:30.620987] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:29:02.206 [2024-10-08 18:56:30.622374] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:29:02.206 [2024-10-08 18:56:30.629151] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:29:02.206 [2024-10-08 18:56:30.629257] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:29:02.206 [2024-10-08 18:56:30.629267] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:29:02.206 [2024-10-08 18:56:30.629276] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:29:02.206 [2024-10-08 18:56:30.637169] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:29:02.206 [2024-10-08 18:56:30.637202] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:29:02.206 [2024-10-08 18:56:30.645000] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:29:02.206 [2024-10-08 18:56:30.645130] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:29:02.206 [2024-10-08 18:56:30.661995] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:29:02.206 18:56:30 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:02.206 18:56:30 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:29:02.206 18:56:30 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:29:02.206 18:56:30 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:29:02.206 18:56:30 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.206 18:56:30 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:29:02.206 18:56:30 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.206 18:56:30 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:29:02.206 18:56:30 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:29:02.206 18:56:30 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 73661 00:29:02.206 18:56:30 
ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 73661 ']' 00:29:02.206 18:56:30 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 73661 00:29:02.206 18:56:30 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:29:02.206 18:56:30 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:02.206 18:56:30 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73661 00:29:02.206 18:56:30 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:02.206 18:56:30 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:02.206 killing process with pid 73661 00:29:02.206 18:56:30 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73661' 00:29:02.206 18:56:30 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 73661 00:29:02.206 18:56:30 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 73661 00:29:04.106 [2024-10-08 18:56:32.404103] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:29:04.106 [2024-10-08 18:56:32.436067] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:29:04.106 [2024-10-08 18:56:32.436250] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:29:04.106 [2024-10-08 18:56:32.444040] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:29:04.106 [2024-10-08 18:56:32.444111] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:29:04.106 [2024-10-08 18:56:32.444121] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:29:04.106 [2024-10-08 18:56:32.444161] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:29:04.106 [2024-10-08 18:56:32.444349] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:29:06.007 18:56:34 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:29:06.007 00:29:06.007 real 0m11.296s 00:29:06.007 user 0m8.846s 00:29:06.007 sys 0m3.360s 00:29:06.007 18:56:34 ublk.test_save_ublk_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:06.007 18:56:34 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:29:06.007 ************************************ 00:29:06.007 END TEST test_save_ublk_config 00:29:06.007 ************************************ 00:29:06.007 18:56:34 ublk -- ublk/ublk.sh@139 -- # spdk_pid=73748 00:29:06.007 18:56:34 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:29:06.007 18:56:34 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:06.007 18:56:34 ublk -- ublk/ublk.sh@141 -- # waitforlisten 73748 00:29:06.007 18:56:34 ublk -- common/autotest_common.sh@831 -- # '[' -z 73748 ']' 00:29:06.007 18:56:34 ublk -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:06.007 18:56:34 ublk -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:06.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:06.007 18:56:34 ublk -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
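Note: with the config round-trip done, the main suite starts a fresh spdk_tgt on core mask 0x3 and runs test_create_ublk against it. The create path that follows uses the defaults sourced earlier (MALLOC_SIZE_MB=128, MALLOC_BS=4096, NUM_QUEUE=4, QUEUE_DEPTH=512); condensed into the three RPCs visible in the trace below:

SPDK=/home/vagrant/spdk_repo/spdk
rpc_py="$SPDK/scripts/rpc.py"
"$rpc_py" ublk_create_target                     # one-time ublk target setup
"$rpc_py" bdev_malloc_create 128 4096            # 128 MiB ramdisk, 4 KiB blocks -> "Malloc0"
"$rpc_py" ublk_start_disk Malloc0 0 -q 4 -d 512  # exports /dev/ublkb0 with 4 queues, depth 512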
00:29:06.007 18:56:34 ublk -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:06.007 18:56:34 ublk -- common/autotest_common.sh@10 -- # set +x 00:29:06.007 [2024-10-08 18:56:34.744276] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:29:06.007 [2024-10-08 18:56:34.744456] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73748 ] 00:29:06.333 [2024-10-08 18:56:34.923806] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:06.592 [2024-10-08 18:56:35.139923] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:29:06.592 [2024-10-08 18:56:35.139991] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:29:07.525 18:56:36 ublk -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:07.525 18:56:36 ublk -- common/autotest_common.sh@864 -- # return 0 00:29:07.525 18:56:36 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:29:07.525 18:56:36 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:07.525 18:56:36 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:07.525 18:56:36 ublk -- common/autotest_common.sh@10 -- # set +x 00:29:07.525 ************************************ 00:29:07.525 START TEST test_create_ublk 00:29:07.525 ************************************ 00:29:07.525 18:56:36 ublk.test_create_ublk -- common/autotest_common.sh@1125 -- # test_create_ublk 00:29:07.525 18:56:36 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:29:07.525 18:56:36 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.525 18:56:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:07.525 [2024-10-08 18:56:36.070978] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:29:07.525 [2024-10-08 18:56:36.073143] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:29:07.525 18:56:36 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.525 18:56:36 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:29:07.525 18:56:36 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:29:07.525 18:56:36 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.525 18:56:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:07.783 18:56:36 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.783 18:56:36 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:29:07.783 18:56:36 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:29:07.783 18:56:36 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.783 18:56:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:07.783 [2024-10-08 18:56:36.388170] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:29:07.783 [2024-10-08 18:56:36.388637] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:29:07.783 [2024-10-08 18:56:36.388657] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:29:07.783 [2024-10-08 18:56:36.388666] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd 
UBLK_CMD_ADD_DEV 00:29:07.783 [2024-10-08 18:56:36.396397] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:29:07.783 [2024-10-08 18:56:36.396419] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:29:07.783 [2024-10-08 18:56:36.404015] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:29:07.783 [2024-10-08 18:56:36.404684] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:29:07.783 [2024-10-08 18:56:36.418079] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:29:07.783 18:56:36 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.783 18:56:36 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:29:07.783 18:56:36 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:29:07.783 18:56:36 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:29:07.783 18:56:36 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.783 18:56:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:07.783 18:56:36 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.783 18:56:36 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:29:07.783 { 00:29:07.783 "ublk_device": "/dev/ublkb0", 00:29:07.783 "id": 0, 00:29:07.783 "queue_depth": 512, 00:29:07.783 "num_queues": 4, 00:29:07.783 "bdev_name": "Malloc0" 00:29:07.783 } 00:29:07.783 ]' 00:29:07.783 18:56:36 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:29:07.783 18:56:36 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:29:07.783 18:56:36 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:29:08.042 18:56:36 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:29:08.042 18:56:36 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:29:08.042 18:56:36 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:29:08.042 18:56:36 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:29:08.042 18:56:36 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:29:08.042 18:56:36 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:29:08.042 18:56:36 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:29:08.042 18:56:36 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:29:08.042 18:56:36 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:29:08.042 18:56:36 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:29:08.042 18:56:36 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:29:08.042 18:56:36 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:29:08.042 18:56:36 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:29:08.042 18:56:36 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:29:08.042 18:56:36 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:29:08.042 18:56:36 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:29:08.042 18:56:36 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
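Note: before any I/O, the test asserts on every field of the ublk_get_disks output shown above. Condensed into plain shell, the checks amount to:

SPDK=/home/vagrant/spdk_repo/spdk
disks="$("$SPDK/scripts/rpc.py" ublk_get_disks -n 0)"
[[ "$(jq -r '.[0].ublk_device' <<< "$disks")" == /dev/ublkb0 ]]
[[ "$(jq -r '.[0].queue_depth' <<< "$disks")" == 512 ]]
[[ "$(jq -r '.[0].num_queues'  <<< "$disks")" == 4 ]]
[[ "$(jq -r '.[0].bdev_name'   <<< "$disks")" == Malloc0 ]]
# run_fio_test (lvol/common.sh) then assembles the fio command printed below.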
00:29:08.042 18:56:36 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:29:08.042 18:56:36 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:29:08.300 fio: verification read phase will never start because write phase uses all of runtime 00:29:08.300 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:29:08.300 fio-3.35 00:29:08.300 Starting 1 process 00:29:18.419 00:29:18.419 fio_test: (groupid=0, jobs=1): err= 0: pid=73800: Tue Oct 8 18:56:46 2024 00:29:18.419 write: IOPS=15.4k, BW=60.0MiB/s (62.9MB/s)(600MiB/10001msec); 0 zone resets 00:29:18.419 clat (usec): min=40, max=4281, avg=64.25, stdev=102.16 00:29:18.419 lat (usec): min=40, max=4312, avg=64.73, stdev=102.18 00:29:18.419 clat percentiles (usec): 00:29:18.419 | 1.00th=[ 45], 5.00th=[ 53], 10.00th=[ 54], 20.00th=[ 55], 00:29:18.419 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 58], 60.00th=[ 59], 00:29:18.419 | 70.00th=[ 61], 80.00th=[ 64], 90.00th=[ 71], 95.00th=[ 79], 00:29:18.419 | 99.00th=[ 92], 99.50th=[ 100], 99.90th=[ 2147], 99.95th=[ 2835], 00:29:18.419 | 99.99th=[ 3556] 00:29:18.419 bw ( KiB/s): min=48352, max=66472, per=100.00%, avg=61505.26, stdev=4387.99, samples=19 00:29:18.419 iops : min=12088, max=16618, avg=15376.32, stdev=1097.00, samples=19 00:29:18.419 lat (usec) : 50=2.42%, 100=97.09%, 250=0.29%, 500=0.01%, 750=0.01% 00:29:18.419 lat (usec) : 1000=0.01% 00:29:18.419 lat (msec) : 2=0.07%, 4=0.11%, 10=0.01% 00:29:18.419 cpu : usr=3.40%, sys=10.61%, ctx=153638, majf=0, minf=797 00:29:18.419 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:18.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.419 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.419 issued rwts: total=0,153638,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:18.419 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:18.419 00:29:18.419 Run status group 0 (all jobs): 00:29:18.419 WRITE: bw=60.0MiB/s (62.9MB/s), 60.0MiB/s-60.0MiB/s (62.9MB/s-62.9MB/s), io=600MiB (629MB), run=10001-10001msec 00:29:18.419 00:29:18.419 Disk stats (read/write): 00:29:18.419 ublkb0: ios=0/152005, merge=0/0, ticks=0/8587, in_queue=8587, util=99.11% 00:29:18.419 18:56:46 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:29:18.419 18:56:46 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.419 18:56:46 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:18.419 [2024-10-08 18:56:46.957599] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:29:18.419 [2024-10-08 18:56:46.995481] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:29:18.419 [2024-10-08 18:56:46.996485] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:29:18.419 [2024-10-08 18:56:47.004005] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:29:18.419 [2024-10-08 18:56:47.004312] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 
00:29:18.419 [2024-10-08 18:56:47.004333] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:29:18.419 18:56:47 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.419 18:56:47 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:29:18.419 18:56:47 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:29:18.419 18:56:47 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:29:18.419 18:56:47 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:18.419 18:56:47 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:18.419 18:56:47 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:18.419 18:56:47 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:18.419 18:56:47 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:29:18.419 18:56:47 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.419 18:56:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:18.419 [2024-10-08 18:56:47.027112] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:29:18.419 request: 00:29:18.419 { 00:29:18.419 "ublk_id": 0, 00:29:18.419 "method": "ublk_stop_disk", 00:29:18.419 "req_id": 1 00:29:18.419 } 00:29:18.419 Got JSON-RPC error response 00:29:18.419 response: 00:29:18.419 { 00:29:18.419 "code": -19, 00:29:18.419 "message": "No such device" 00:29:18.419 } 00:29:18.419 18:56:47 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:18.419 18:56:47 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:29:18.419 18:56:47 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:18.419 18:56:47 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:18.419 18:56:47 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:18.419 18:56:47 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:29:18.419 18:56:47 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.419 18:56:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:18.419 [2024-10-08 18:56:47.042133] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:29:18.419 [2024-10-08 18:56:47.045621] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:29:18.419 [2024-10-08 18:56:47.045669] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:29:18.419 18:56:47 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.419 18:56:47 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:18.419 18:56:47 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.419 18:56:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:19.354 18:56:47 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.354 18:56:47 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:29:19.354 18:56:47 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:29:19.354 18:56:47 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.354 18:56:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:19.354 18:56:47 
ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.354 18:56:47 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:29:19.354 18:56:47 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:29:19.354 18:56:47 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:29:19.354 18:56:47 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:29:19.354 18:56:47 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.354 18:56:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:19.354 18:56:47 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.354 18:56:47 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:29:19.354 18:56:47 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:29:19.354 18:56:47 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:29:19.354 00:29:19.354 real 0m11.887s 00:29:19.354 user 0m0.751s 00:29:19.354 sys 0m1.186s 00:29:19.354 18:56:47 ublk.test_create_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:19.354 18:56:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:19.354 ************************************ 00:29:19.354 END TEST test_create_ublk 00:29:19.354 ************************************ 00:29:19.354 18:56:47 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:29:19.354 18:56:47 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:19.354 18:56:47 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:19.354 18:56:47 ublk -- common/autotest_common.sh@10 -- # set +x 00:29:19.354 ************************************ 00:29:19.354 START TEST test_create_multi_ublk 00:29:19.354 ************************************ 00:29:19.354 18:56:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@1125 -- # test_create_multi_ublk 00:29:19.354 18:56:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:29:19.354 18:56:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.354 18:56:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:19.354 [2024-10-08 18:56:48.015994] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:29:19.354 [2024-10-08 18:56:48.018313] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:29:19.354 18:56:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.354 18:56:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:29:19.354 18:56:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:29:19.354 18:56:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:29:19.354 18:56:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:29:19.354 18:56:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.354 18:56:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:19.613 18:56:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.613 18:56:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:29:19.613 18:56:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:29:19.613 18:56:48 
ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.613 18:56:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:19.613 [2024-10-08 18:56:48.306149] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:29:19.613 [2024-10-08 18:56:48.306636] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:29:19.613 [2024-10-08 18:56:48.306654] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:29:19.613 [2024-10-08 18:56:48.306669] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:29:19.613 [2024-10-08 18:56:48.314018] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:29:19.613 [2024-10-08 18:56:48.314049] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:29:19.613 [2024-10-08 18:56:48.321983] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:29:19.613 [2024-10-08 18:56:48.322606] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:29:19.613 [2024-10-08 18:56:48.352979] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:29:19.613 18:56:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.613 18:56:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:29:19.613 18:56:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:29:19.613 18:56:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:29:19.613 18:56:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.613 18:56:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:20.179 18:56:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.179 18:56:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:29:20.179 18:56:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:29:20.179 18:56:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.179 18:56:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:20.179 [2024-10-08 18:56:48.668147] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:29:20.179 [2024-10-08 18:56:48.668643] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:29:20.179 [2024-10-08 18:56:48.668665] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:29:20.179 [2024-10-08 18:56:48.668686] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:29:20.179 [2024-10-08 18:56:48.676030] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:29:20.179 [2024-10-08 18:56:48.676072] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:29:20.179 [2024-10-08 18:56:48.683027] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:29:20.179 [2024-10-08 18:56:48.683708] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:29:20.179 [2024-10-08 18:56:48.692037] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:29:20.179 18:56:48 
ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.179 18:56:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:29:20.179 18:56:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:29:20.179 18:56:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:29:20.179 18:56:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.179 18:56:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:20.437 18:56:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.437 18:56:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:29:20.437 18:56:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:29:20.437 18:56:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.437 18:56:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:20.437 [2024-10-08 18:56:49.013146] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:29:20.437 [2024-10-08 18:56:49.013642] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:29:20.437 [2024-10-08 18:56:49.013661] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:29:20.437 [2024-10-08 18:56:49.013672] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:29:20.437 [2024-10-08 18:56:49.019993] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:29:20.437 [2024-10-08 18:56:49.020023] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:29:20.437 [2024-10-08 18:56:49.027981] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:29:20.438 [2024-10-08 18:56:49.028636] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:29:20.438 [2024-10-08 18:56:49.037026] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:29:20.438 18:56:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.438 18:56:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:29:20.438 18:56:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:29:20.438 18:56:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:29:20.438 18:56:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.438 18:56:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:20.696 18:56:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.696 18:56:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:29:20.696 18:56:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:29:20.696 18:56:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.696 18:56:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:20.696 [2024-10-08 18:56:49.347180] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:29:20.696 [2024-10-08 18:56:49.347754] ublk.c:1965:ublk_start_disk: *INFO*: Enabling 
kernel access to bdev Malloc3 via ublk 3 00:29:20.696 [2024-10-08 18:56:49.347778] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:29:20.696 [2024-10-08 18:56:49.347788] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:29:20.696 [2024-10-08 18:56:49.355040] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:29:20.696 [2024-10-08 18:56:49.355066] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:29:20.696 [2024-10-08 18:56:49.363007] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:29:20.696 [2024-10-08 18:56:49.363685] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:29:20.696 [2024-10-08 18:56:49.372010] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:29:20.696 18:56:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.696 18:56:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:29:20.696 18:56:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:29:20.696 18:56:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.696 18:56:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:20.696 18:56:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.696 18:56:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:29:20.696 { 00:29:20.696 "ublk_device": "/dev/ublkb0", 00:29:20.696 "id": 0, 00:29:20.696 "queue_depth": 512, 00:29:20.696 "num_queues": 4, 00:29:20.696 "bdev_name": "Malloc0" 00:29:20.696 }, 00:29:20.696 { 00:29:20.696 "ublk_device": "/dev/ublkb1", 00:29:20.696 "id": 1, 00:29:20.696 "queue_depth": 512, 00:29:20.696 "num_queues": 4, 00:29:20.696 "bdev_name": "Malloc1" 00:29:20.696 }, 00:29:20.696 { 00:29:20.696 "ublk_device": "/dev/ublkb2", 00:29:20.696 "id": 2, 00:29:20.696 "queue_depth": 512, 00:29:20.696 "num_queues": 4, 00:29:20.696 "bdev_name": "Malloc2" 00:29:20.696 }, 00:29:20.696 { 00:29:20.696 "ublk_device": "/dev/ublkb3", 00:29:20.696 "id": 3, 00:29:20.696 "queue_depth": 512, 00:29:20.696 "num_queues": 4, 00:29:20.696 "bdev_name": "Malloc3" 00:29:20.696 } 00:29:20.696 ]' 00:29:20.696 18:56:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:29:20.696 18:56:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:29:20.696 18:56:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:29:20.696 18:56:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:29:20.696 18:56:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:29:20.955 18:56:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:29:20.955 18:56:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:29:20.955 18:56:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:29:20.955 18:56:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:29:20.955 18:56:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:29:20.955 18:56:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:29:20.955 18:56:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:29:20.955 18:56:49 
ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:29:20.955 18:56:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:29:20.955 18:56:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:29:20.955 18:56:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:29:20.955 18:56:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:29:20.955 18:56:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:29:21.213 18:56:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:29:21.213 18:56:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:29:21.213 18:56:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:29:21.213 18:56:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:29:21.213 18:56:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:29:21.213 18:56:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:29:21.213 18:56:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:29:21.213 18:56:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:29:21.213 18:56:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:29:21.213 18:56:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:29:21.213 18:56:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:29:21.487 18:56:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:29:21.487 18:56:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:29:21.487 18:56:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:29:21.487 18:56:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:29:21.487 18:56:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:29:21.487 18:56:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:29:21.487 18:56:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:29:21.487 18:56:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:29:21.487 18:56:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:29:21.487 18:56:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:29:21.487 18:56:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:29:21.487 18:56:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:29:21.487 18:56:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:29:21.487 18:56:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:29:21.487 18:56:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:29:21.763 18:56:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:29:21.764 18:56:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:29:21.764 18:56:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:29:21.764 18:56:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:29:21.764 18:56:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # 
rpc_cmd ublk_stop_disk 0 00:29:21.764 18:56:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.764 18:56:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:21.764 [2024-10-08 18:56:50.270157] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:29:21.764 [2024-10-08 18:56:50.309520] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:29:21.764 [2024-10-08 18:56:50.310718] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:29:21.764 [2024-10-08 18:56:50.317046] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:29:21.764 [2024-10-08 18:56:50.317359] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:29:21.764 [2024-10-08 18:56:50.317376] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:29:21.764 18:56:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.764 18:56:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:29:21.764 18:56:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:29:21.764 18:56:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.764 18:56:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:21.764 [2024-10-08 18:56:50.333082] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:29:21.764 [2024-10-08 18:56:50.372510] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:29:21.764 [2024-10-08 18:56:50.373665] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:29:21.764 [2024-10-08 18:56:50.378024] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:29:21.764 [2024-10-08 18:56:50.378390] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:29:21.764 [2024-10-08 18:56:50.378408] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:29:21.764 18:56:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.764 18:56:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:29:21.764 18:56:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:29:21.764 18:56:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.764 18:56:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:21.764 [2024-10-08 18:56:50.382158] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:29:21.764 [2024-10-08 18:56:50.412063] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:29:21.764 [2024-10-08 18:56:50.412931] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:29:21.764 [2024-10-08 18:56:50.419017] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:29:21.764 [2024-10-08 18:56:50.419379] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:29:21.764 [2024-10-08 18:56:50.419397] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:29:21.764 18:56:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.764 18:56:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 
00:29:21.764 18:56:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:29:21.764 18:56:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.764 18:56:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:21.764 [2024-10-08 18:56:50.427087] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:29:21.764 [2024-10-08 18:56:50.459036] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:29:21.764 [2024-10-08 18:56:50.459779] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:29:21.764 [2024-10-08 18:56:50.467110] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:29:21.764 [2024-10-08 18:56:50.467390] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:29:21.764 [2024-10-08 18:56:50.467404] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:29:21.764 18:56:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:21.764 18:56:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:29:22.022 [2024-10-08 18:56:50.760092] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:29:22.022 [2024-10-08 18:56:50.763285] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:29:22.022 [2024-10-08 18:56:50.763333] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:29:22.281 18:56:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:29:22.281 18:56:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:29:22.281 18:56:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:22.281 18:56:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.281 18:56:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:22.849 18:56:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.849 18:56:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:29:22.849 18:56:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:29:22.849 18:56:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:22.849 18:56:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:23.416 18:56:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.416 18:56:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:29:23.416 18:56:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:29:23.416 18:56:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.416 18:56:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:23.675 18:56:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.675 18:56:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:29:23.675 18:56:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:29:23.675 18:56:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.675 18:56:52 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:29:23.934 18:56:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.934 18:56:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:29:23.934 18:56:52 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:29:23.934 18:56:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.934 18:56:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:23.934 18:56:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.934 18:56:52 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:29:23.934 18:56:52 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:29:24.193 18:56:52 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:29:24.193 18:56:52 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:29:24.193 18:56:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.193 18:56:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:24.193 18:56:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.193 18:56:52 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:29:24.193 18:56:52 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:29:24.193 ************************************ 00:29:24.193 END TEST test_create_multi_ublk 00:29:24.193 ************************************ 00:29:24.193 18:56:52 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:29:24.193 00:29:24.193 real 0m4.766s 00:29:24.193 user 0m1.128s 00:29:24.193 sys 0m0.205s 00:29:24.193 18:56:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:24.193 18:56:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:29:24.193 18:56:52 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:29:24.193 18:56:52 ublk -- ublk/ublk.sh@147 -- # cleanup 00:29:24.193 18:56:52 ublk -- ublk/ublk.sh@130 -- # killprocess 73748 00:29:24.193 18:56:52 ublk -- common/autotest_common.sh@950 -- # '[' -z 73748 ']' 00:29:24.193 18:56:52 ublk -- common/autotest_common.sh@954 -- # kill -0 73748 00:29:24.193 18:56:52 ublk -- common/autotest_common.sh@955 -- # uname 00:29:24.193 18:56:52 ublk -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:24.193 18:56:52 ublk -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73748 00:29:24.193 killing process with pid 73748 00:29:24.193 18:56:52 ublk -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:24.193 18:56:52 ublk -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:24.193 18:56:52 ublk -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73748' 00:29:24.193 18:56:52 ublk -- common/autotest_common.sh@969 -- # kill 73748 00:29:24.193 18:56:52 ublk -- common/autotest_common.sh@974 -- # wait 73748 00:29:25.570 [2024-10-08 18:56:54.054529] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:29:25.570 [2024-10-08 18:56:54.054585] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:29:27.025 00:29:27.025 real 0m32.541s 00:29:27.025 user 0m46.345s 00:29:27.025 sys 0m10.605s 00:29:27.025 18:56:55 ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:27.025 18:56:55 ublk -- common/autotest_common.sh@10 -- # set +x 00:29:27.025 
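
[editor's note] Both ublk tests above walk the same RPC lifecycle. A condensed sketch of that lifecycle, assuming a running spdk_tgt and using the rpc.py path invoked earlier in this log; commands are the ones visible in the traces above:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC ublk_create_target                      # one-time target setup
  $RPC bdev_malloc_create -b Malloc0 128 4096  # 128 MiB bdev, 4 KiB blocks
  $RPC ublk_start_disk Malloc0 0 -q 4 -d 512   # exposes /dev/ublkb0
  $RPC ublk_get_disks -n 0                     # confirm id/queue_depth/num_queues
  $RPC ublk_stop_disk 0                        # drives STOP_DEV, then DEL_DEV
  $RPC ublk_destroy_target                     # tears the target down
  $RPC bdev_malloc_delete Malloc0              # release the backing bdev

test_create_multi_ublk repeats the start/verify/stop steps for Malloc0 through Malloc3 (ublkb0 through ublkb3) before tearing everything down in one pass.
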
************************************ 00:29:27.025 END TEST ublk 00:29:27.025 ************************************ 00:29:27.025 18:56:55 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:29:27.025 18:56:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:27.025 18:56:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:27.025 18:56:55 -- common/autotest_common.sh@10 -- # set +x 00:29:27.025 ************************************ 00:29:27.025 START TEST ublk_recovery 00:29:27.025 ************************************ 00:29:27.025 18:56:55 ublk_recovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:29:27.025 * Looking for test storage... 00:29:27.025 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:29:27.025 18:56:55 ublk_recovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:27.025 18:56:55 ublk_recovery -- common/autotest_common.sh@1681 -- # lcov --version 00:29:27.025 18:56:55 ublk_recovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:27.025 18:56:55 ublk_recovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:27.025 18:56:55 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:27.025 18:56:55 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:27.025 18:56:55 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:27.025 18:56:55 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:29:27.025 18:56:55 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:29:27.025 18:56:55 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:29:27.025 18:56:55 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:29:27.025 18:56:55 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:29:27.025 18:56:55 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:29:27.025 18:56:55 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:29:27.025 18:56:55 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:27.025 18:56:55 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:29:27.025 18:56:55 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:29:27.025 18:56:55 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:27.025 18:56:55 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:27.025 18:56:55 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:29:27.025 18:56:55 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:29:27.025 18:56:55 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:27.025 18:56:55 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:29:27.025 18:56:55 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:29:27.025 18:56:55 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:29:27.025 18:56:55 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:29:27.025 18:56:55 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:27.025 18:56:55 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:29:27.026 18:56:55 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:29:27.026 18:56:55 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:27.026 18:56:55 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:27.026 18:56:55 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:29:27.026 18:56:55 ublk_recovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:27.026 18:56:55 ublk_recovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:27.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.026 --rc genhtml_branch_coverage=1 00:29:27.026 --rc genhtml_function_coverage=1 00:29:27.026 --rc genhtml_legend=1 00:29:27.026 --rc geninfo_all_blocks=1 00:29:27.026 --rc geninfo_unexecuted_blocks=1 00:29:27.026 00:29:27.026 ' 00:29:27.026 18:56:55 ublk_recovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:27.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.026 --rc genhtml_branch_coverage=1 00:29:27.026 --rc genhtml_function_coverage=1 00:29:27.026 --rc genhtml_legend=1 00:29:27.026 --rc geninfo_all_blocks=1 00:29:27.026 --rc geninfo_unexecuted_blocks=1 00:29:27.026 00:29:27.026 ' 00:29:27.026 18:56:55 ublk_recovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:27.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.026 --rc genhtml_branch_coverage=1 00:29:27.026 --rc genhtml_function_coverage=1 00:29:27.026 --rc genhtml_legend=1 00:29:27.026 --rc geninfo_all_blocks=1 00:29:27.026 --rc geninfo_unexecuted_blocks=1 00:29:27.026 00:29:27.026 ' 00:29:27.026 18:56:55 ublk_recovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:27.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:27.026 --rc genhtml_branch_coverage=1 00:29:27.026 --rc genhtml_function_coverage=1 00:29:27.026 --rc genhtml_legend=1 00:29:27.026 --rc geninfo_all_blocks=1 00:29:27.026 --rc geninfo_unexecuted_blocks=1 00:29:27.026 00:29:27.026 ' 00:29:27.026 18:56:55 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:29:27.026 18:56:55 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:29:27.026 18:56:55 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:29:27.026 18:56:55 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:29:27.026 18:56:55 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:29:27.026 18:56:55 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:29:27.026 18:56:55 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:29:27.026 18:56:55 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:29:27.026 18:56:55 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:29:27.026 18:56:55 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:29:27.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:27.026 18:56:55 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=74187 00:29:27.026 18:56:55 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:27.026 18:56:55 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 74187 00:29:27.026 18:56:55 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 74187 ']' 00:29:27.026 18:56:55 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:29:27.026 18:56:55 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:27.026 18:56:55 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:27.026 18:56:55 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:27.026 18:56:55 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:27.026 18:56:55 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:29:27.285 [2024-10-08 18:56:55.862766] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:29:27.285 [2024-10-08 18:56:55.862909] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74187 ] 00:29:27.285 [2024-10-08 18:56:56.027221] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:27.544 [2024-10-08 18:56:56.248512] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:29:27.544 [2024-10-08 18:56:56.248542] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:29:28.479 18:56:57 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:28.479 18:56:57 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:29:28.479 18:56:57 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:29:28.479 18:56:57 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.479 18:56:57 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:29:28.479 [2024-10-08 18:56:57.190980] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:29:28.479 [2024-10-08 18:56:57.193610] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:29:28.479 18:56:57 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.479 18:56:57 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:29:28.479 18:56:57 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.479 18:56:57 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:29:28.737 malloc0 00:29:28.737 18:56:57 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.737 18:56:57 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:29:28.737 18:56:57 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.737 18:56:57 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:29:28.737 [2024-10-08 18:56:57.356174] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:29:28.737 [2024-10-08 18:56:57.356300] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:29:28.737 [2024-10-08 18:56:57.356316] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:29:28.737 [2024-10-08 18:56:57.356326] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:29:28.737 [2024-10-08 18:56:57.364030] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:29:28.737 [2024-10-08 18:56:57.364063] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:29:28.737 [2024-10-08 18:56:57.372014] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:29:28.737 [2024-10-08 18:56:57.372187] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:29:28.737 [2024-10-08 18:56:57.395008] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:29:28.737 1 00:29:28.737 18:56:57 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.737 18:56:57 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:29:29.672 18:56:58 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=74222 00:29:29.672 18:56:58 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:29:29.672 18:56:58 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:29:29.929 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:29.929 fio-3.35 00:29:29.929 Starting 1 process 00:29:35.196 18:57:03 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 74187 00:29:35.196 18:57:03 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:29:40.461 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 74187 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:29:40.461 18:57:08 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=74335 00:29:40.461 18:57:08 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:29:40.461 18:57:08 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:40.461 18:57:08 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 74335 00:29:40.461 18:57:08 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 74335 ']' 00:29:40.461 18:57:08 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:40.461 18:57:08 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:40.461 18:57:08 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:40.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:40.461 18:57:08 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:40.461 18:57:08 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:29:40.461 [2024-10-08 18:57:08.562326] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
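
[editor's note] The recovery scenario has just reached its crash point: fio is driving /dev/ublkb1 when the target (pid 74187) is SIGKILLed, and a fresh spdk_tgt (pid 74335) is coming up; the trace that follows shows it reclaiming the device with ublk_recover_disk. A sketch of that sequence, limited to the commands actually visible in this log (SPDK_BIN_DIR and the pid variables are the harness's own; the harness also waits for the new target's RPC socket via waitforlisten before issuing RPCs):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 \
      --numjobs=1 --iodepth=128 --ioengine=libaio \
      --rw=randrw --direct=1 --time_based --runtime=60 &
  fio_pid=$!
  kill -9 "$spdk_pid"                          # crash the target mid-I/O
  "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &    # restart a fresh target
  spdk_pid=$!
  $RPC ublk_create_target
  $RPC bdev_malloc_create -b malloc0 64 4096
  $RPC ublk_recover_disk malloc0 1             # GET_DEV_INFO, then
                                               # START/END_USER_RECOVERY
  wait "$fio_pid"                              # fio rides out the crash

The wait at the end is the point of the test: the 60 s fio job is expected to survive the kill, because the kernel-side ublk device stays present while user-space recovery reattaches it to the new target, after which I/O resumes and fio completes, as the results below confirm.
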
00:29:40.461 [2024-10-08 18:57:08.562505] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74335 ] 00:29:40.461 [2024-10-08 18:57:08.735335] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:40.461 [2024-10-08 18:57:08.990272] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:29:40.461 [2024-10-08 18:57:08.990303] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:29:41.404 18:57:09 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:41.404 18:57:09 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:29:41.404 18:57:09 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:29:41.404 18:57:09 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.404 18:57:09 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:29:41.404 [2024-10-08 18:57:09.912987] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:29:41.404 [2024-10-08 18:57:09.915550] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:29:41.404 18:57:09 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.404 18:57:09 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:29:41.404 18:57:09 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.404 18:57:09 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:29:41.404 malloc0 00:29:41.404 18:57:10 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.404 18:57:10 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:29:41.404 18:57:10 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.404 18:57:10 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:29:41.404 [2024-10-08 18:57:10.071143] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:29:41.404 [2024-10-08 18:57:10.071190] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:29:41.404 [2024-10-08 18:57:10.071203] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:29:41.404 [2024-10-08 18:57:10.079029] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:29:41.404 [2024-10-08 18:57:10.079057] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:29:41.404 [2024-10-08 18:57:10.079068] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:29:41.404 1 00:29:41.404 [2024-10-08 18:57:10.079174] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:29:41.404 18:57:10 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.404 18:57:10 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 74222 00:29:41.404 [2024-10-08 18:57:10.086989] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:29:41.404 [2024-10-08 18:57:10.093466] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:29:41.404 [2024-10-08 18:57:10.101179] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:29:41.404 [2024-10-08 
18:57:10.101208] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:30:37.639 00:30:37.639 fio_test: (groupid=0, jobs=1): err= 0: pid=74225: Tue Oct 8 18:57:58 2024 00:30:37.639 read: IOPS=18.2k, BW=71.0MiB/s (74.5MB/s)(4261MiB/60003msec) 00:30:37.639 slat (usec): min=2, max=1447, avg= 7.07, stdev= 2.73 00:30:37.639 clat (usec): min=1100, max=6699.0k, avg=3442.18, stdev=50474.03 00:30:37.639 lat (usec): min=1107, max=6699.0k, avg=3449.25, stdev=50474.03 00:30:37.639 clat percentiles (usec): 00:30:37.639 | 1.00th=[ 2212], 5.00th=[ 2343], 10.00th=[ 2376], 20.00th=[ 2474], 00:30:37.639 | 30.00th=[ 2638], 40.00th=[ 2769], 50.00th=[ 2933], 60.00th=[ 3163], 00:30:37.639 | 70.00th=[ 3359], 80.00th=[ 3490], 90.00th=[ 3654], 95.00th=[ 4228], 00:30:37.640 | 99.00th=[ 5669], 99.50th=[ 6063], 99.90th=[ 7111], 99.95th=[ 8455], 00:30:37.640 | 99.99th=[13173] 00:30:37.640 bw ( KiB/s): min= 4872, max=100832, per=100.00%, avg=80834.51, stdev=14087.48, samples=107 00:30:37.640 iops : min= 1218, max=25208, avg=20208.63, stdev=3521.87, samples=107 00:30:37.640 write: IOPS=18.2k, BW=71.0MiB/s (74.4MB/s)(4259MiB/60003msec); 0 zone resets 00:30:37.640 slat (usec): min=2, max=520, avg= 7.22, stdev= 2.50 00:30:37.640 clat (usec): min=1077, max=6699.1k, avg=3585.05, stdev=52089.95 00:30:37.640 lat (usec): min=1084, max=6699.1k, avg=3592.28, stdev=52089.95 00:30:37.640 clat percentiles (usec): 00:30:37.640 | 1.00th=[ 2278], 5.00th=[ 2442], 10.00th=[ 2507], 20.00th=[ 2606], 00:30:37.640 | 30.00th=[ 2769], 40.00th=[ 2900], 50.00th=[ 3032], 60.00th=[ 3294], 00:30:37.640 | 70.00th=[ 3490], 80.00th=[ 3621], 90.00th=[ 3752], 95.00th=[ 4228], 00:30:37.640 | 99.00th=[ 5735], 99.50th=[ 6128], 99.90th=[ 7177], 99.95th=[ 8586], 00:30:37.640 | 99.99th=[13304] 00:30:37.640 bw ( KiB/s): min= 5000, max=100840, per=100.00%, avg=80780.43, stdev=13949.20, samples=107 00:30:37.640 iops : min= 1250, max=25210, avg=20195.09, stdev=3487.32, samples=107 00:30:37.640 lat (msec) : 2=0.24%, 4=93.51%, 10=6.22%, 20=0.02%, >=2000=0.01% 00:30:37.640 cpu : usr=8.72%, sys=25.93%, ctx=73938, majf=0, minf=13 00:30:37.640 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:30:37.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:37.640 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:37.640 issued rwts: total=1090783,1090251,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:37.640 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:37.640 00:30:37.640 Run status group 0 (all jobs): 00:30:37.640 READ: bw=71.0MiB/s (74.5MB/s), 71.0MiB/s-71.0MiB/s (74.5MB/s-74.5MB/s), io=4261MiB (4468MB), run=60003-60003msec 00:30:37.640 WRITE: bw=71.0MiB/s (74.4MB/s), 71.0MiB/s-71.0MiB/s (74.4MB/s-74.4MB/s), io=4259MiB (4466MB), run=60003-60003msec 00:30:37.640 00:30:37.640 Disk stats (read/write): 00:30:37.640 ublkb1: ios=1088079/1087601, merge=0/0, ticks=3665641/3687201, in_queue=7352843, util=99.93% 00:30:37.640 18:57:58 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:30:37.640 18:57:58 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.640 18:57:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:30:37.640 [2024-10-08 18:57:58.700166] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:30:37.640 [2024-10-08 18:57:58.730115] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:30:37.640 [2024-10-08 18:57:58.730317] 
ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:30:37.640 [2024-10-08 18:57:58.738000] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:30:37.640 [2024-10-08 18:57:58.738130] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:30:37.640 [2024-10-08 18:57:58.738150] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:30:37.640 18:57:58 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.640 18:57:58 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:30:37.640 18:57:58 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:37.640 18:57:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:30:37.640 [2024-10-08 18:57:58.754095] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:30:37.640 [2024-10-08 18:57:58.757288] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:30:37.640 [2024-10-08 18:57:58.757332] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:30:37.640 18:57:58 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:37.640 18:57:58 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:30:37.640 18:57:58 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:30:37.640 18:57:58 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 74335 00:30:37.640 18:57:58 ublk_recovery -- common/autotest_common.sh@950 -- # '[' -z 74335 ']' 00:30:37.640 18:57:58 ublk_recovery -- common/autotest_common.sh@954 -- # kill -0 74335 00:30:37.640 18:57:58 ublk_recovery -- common/autotest_common.sh@955 -- # uname 00:30:37.640 18:57:58 ublk_recovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:37.640 18:57:58 ublk_recovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74335 00:30:37.640 18:57:58 ublk_recovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:37.640 killing process with pid 74335 00:30:37.640 18:57:58 ublk_recovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:37.640 18:57:58 ublk_recovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74335' 00:30:37.640 18:57:58 ublk_recovery -- common/autotest_common.sh@969 -- # kill 74335 00:30:37.640 18:57:58 ublk_recovery -- common/autotest_common.sh@974 -- # wait 74335 00:30:37.640 [2024-10-08 18:58:00.467821] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:30:37.640 [2024-10-08 18:58:00.467885] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:30:37.640 00:30:37.640 real 1m6.516s 00:30:37.640 user 1m48.712s 00:30:37.640 sys 0m33.829s 00:30:37.640 18:58:02 ublk_recovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:37.640 18:58:02 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:30:37.640 ************************************ 00:30:37.640 END TEST ublk_recovery 00:30:37.640 ************************************ 00:30:37.640 18:58:02 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:30:37.640 18:58:02 -- spdk/autotest.sh@256 -- # timing_exit lib 00:30:37.640 18:58:02 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:37.640 18:58:02 -- common/autotest_common.sh@10 -- # set +x 00:30:37.640 18:58:02 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:30:37.640 18:58:02 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:30:37.640 18:58:02 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:30:37.640 18:58:02 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:30:37.640 18:58:02 -- spdk/autotest.sh@311 
-- # '[' 0 -eq 1 ']' 00:30:37.640 18:58:02 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:30:37.640 18:58:02 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:30:37.640 18:58:02 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:30:37.640 18:58:02 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:30:37.640 18:58:02 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:30:37.640 18:58:02 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:30:37.640 18:58:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:37.640 18:58:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:37.640 18:58:02 -- common/autotest_common.sh@10 -- # set +x 00:30:37.640 ************************************ 00:30:37.640 START TEST ftl 00:30:37.640 ************************************ 00:30:37.640 18:58:02 ftl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:30:37.640 * Looking for test storage... 00:30:37.640 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:30:37.640 18:58:02 ftl -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:37.640 18:58:02 ftl -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:37.640 18:58:02 ftl -- common/autotest_common.sh@1681 -- # lcov --version 00:30:37.640 18:58:02 ftl -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:37.640 18:58:02 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:37.640 18:58:02 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:37.640 18:58:02 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:37.640 18:58:02 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:30:37.640 18:58:02 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:30:37.640 18:58:02 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:30:37.640 18:58:02 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:30:37.640 18:58:02 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:30:37.640 18:58:02 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:30:37.640 18:58:02 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:30:37.640 18:58:02 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:37.640 18:58:02 ftl -- scripts/common.sh@344 -- # case "$op" in 00:30:37.640 18:58:02 ftl -- scripts/common.sh@345 -- # : 1 00:30:37.640 18:58:02 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:37.640 18:58:02 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:37.640 18:58:02 ftl -- scripts/common.sh@365 -- # decimal 1 00:30:37.640 18:58:02 ftl -- scripts/common.sh@353 -- # local d=1 00:30:37.640 18:58:02 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:37.640 18:58:02 ftl -- scripts/common.sh@355 -- # echo 1 00:30:37.640 18:58:02 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:30:37.640 18:58:02 ftl -- scripts/common.sh@366 -- # decimal 2 00:30:37.640 18:58:02 ftl -- scripts/common.sh@353 -- # local d=2 00:30:37.640 18:58:02 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:37.640 18:58:02 ftl -- scripts/common.sh@355 -- # echo 2 00:30:37.640 18:58:02 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:30:37.640 18:58:02 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:37.640 18:58:02 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:37.640 18:58:02 ftl -- scripts/common.sh@368 -- # return 0 00:30:37.640 18:58:02 ftl -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:37.640 18:58:02 ftl -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:37.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.640 --rc genhtml_branch_coverage=1 00:30:37.640 --rc genhtml_function_coverage=1 00:30:37.640 --rc genhtml_legend=1 00:30:37.640 --rc geninfo_all_blocks=1 00:30:37.640 --rc geninfo_unexecuted_blocks=1 00:30:37.640 00:30:37.640 ' 00:30:37.640 18:58:02 ftl -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:37.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.640 --rc genhtml_branch_coverage=1 00:30:37.640 --rc genhtml_function_coverage=1 00:30:37.640 --rc genhtml_legend=1 00:30:37.640 --rc geninfo_all_blocks=1 00:30:37.640 --rc geninfo_unexecuted_blocks=1 00:30:37.640 00:30:37.640 ' 00:30:37.640 18:58:02 ftl -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:37.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.640 --rc genhtml_branch_coverage=1 00:30:37.640 --rc genhtml_function_coverage=1 00:30:37.640 --rc genhtml_legend=1 00:30:37.640 --rc geninfo_all_blocks=1 00:30:37.640 --rc geninfo_unexecuted_blocks=1 00:30:37.640 00:30:37.640 ' 00:30:37.640 18:58:02 ftl -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:37.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.640 --rc genhtml_branch_coverage=1 00:30:37.640 --rc genhtml_function_coverage=1 00:30:37.640 --rc genhtml_legend=1 00:30:37.640 --rc geninfo_all_blocks=1 00:30:37.640 --rc geninfo_unexecuted_blocks=1 00:30:37.640 00:30:37.640 ' 00:30:37.640 18:58:02 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:30:37.640 18:58:02 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:30:37.641 18:58:02 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:30:37.641 18:58:02 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:30:37.641 18:58:02 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
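The version-compare xtrace above is scripts/common.sh gating extra lcov options on the installed lcov version: lt 1.15 2 splits both version strings on the IFS characters ".-:" and compares them field by field. A condensed sketch of that idiom, reconstructed from the trace rather than copied verbatim from the helper:

    # Dotted-version "less than" test, as traced above (condensed reconstruction).
    lt() { # usage: lt 1.15 2 -> exit 0 when $1 < $2
        local IFS=.-: i v1 v2
        read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
        for ((i = 0; i < (${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]}); i++)); do
            ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
            ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
        done
        return 1 # equal versions are not less-than
    }

The version itself comes from lcov --version piped through awk '{print $NF}', as shown in the trace; since 1 is less than 2 in the first field, the branch- and function-coverage flags get enabled.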
00:30:37.641 18:58:02 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:30:37.641 18:58:02 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:37.641 18:58:02 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:30:37.641 18:58:02 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:30:37.641 18:58:02 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:37.641 18:58:02 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:37.641 18:58:02 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:30:37.641 18:58:02 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:30:37.641 18:58:02 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:37.641 18:58:02 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:37.641 18:58:02 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:30:37.641 18:58:02 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:30:37.641 18:58:02 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:37.641 18:58:02 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:37.641 18:58:02 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:30:37.641 18:58:02 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:30:37.641 18:58:02 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:37.641 18:58:02 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:37.641 18:58:02 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:37.641 18:58:02 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:37.641 18:58:02 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:30:37.641 18:58:02 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:30:37.641 18:58:02 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:37.641 18:58:02 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:37.641 18:58:02 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:37.641 18:58:02 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:30:37.641 18:58:02 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:30:37.641 18:58:02 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:30:37.641 18:58:02 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:30:37.641 18:58:02 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:37.641 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:37.641 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:30:37.641 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:30:37.641 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:30:37.641 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:30:37.641 18:58:03 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:30:37.641 18:58:03 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=75142 00:30:37.641 18:58:03 ftl -- ftl/ftl.sh@38 -- # waitforlisten 75142 00:30:37.641 18:58:03 ftl -- common/autotest_common.sh@831 -- # '[' -z 75142 ']' 00:30:37.641 18:58:03 ftl -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:37.641 18:58:03 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:37.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:37.641 18:58:03 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:37.641 18:58:03 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:37.641 18:58:03 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:37.641 [2024-10-08 18:58:03.177348] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:30:37.641 [2024-10-08 18:58:03.177773] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75142 ] 00:30:37.641 [2024-10-08 18:58:03.346935] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:37.641 [2024-10-08 18:58:03.625534] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:30:37.641 18:58:04 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:37.641 18:58:04 ftl -- common/autotest_common.sh@864 -- # return 0 00:30:37.641 18:58:04 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:30:37.641 18:58:04 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:30:37.641 18:58:05 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:30:37.641 18:58:05 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:30:37.641 18:58:05 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:30:37.641 18:58:05 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:30:37.641 18:58:05 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:30:37.641 18:58:06 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:30:37.641 18:58:06 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:30:37.641 18:58:06 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:30:37.641 18:58:06 ftl -- ftl/ftl.sh@50 -- # break 00:30:37.641 18:58:06 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:30:37.641 18:58:06 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:30:37.641 18:58:06 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:30:37.641 18:58:06 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:30:37.900 18:58:06 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:30:37.900 18:58:06 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:30:37.900 18:58:06 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:30:37.900 18:58:06 ftl -- ftl/ftl.sh@63 -- # break 00:30:37.900 18:58:06 ftl -- ftl/ftl.sh@66 -- # killprocess 75142 00:30:37.900 18:58:06 ftl -- common/autotest_common.sh@950 -- # '[' -z 75142 ']' 00:30:37.900 18:58:06 ftl -- common/autotest_common.sh@954 -- # kill -0 75142 00:30:37.900 18:58:06 ftl -- common/autotest_common.sh@955 -- # uname 00:30:37.900 18:58:06 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:37.900 18:58:06 ftl -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75142 00:30:37.900 18:58:06 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:37.900 18:58:06 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:37.900 killing process with pid 75142 00:30:37.900 18:58:06 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75142' 00:30:37.900 18:58:06 ftl -- common/autotest_common.sh@969 -- # kill 75142 00:30:37.900 18:58:06 ftl -- common/autotest_common.sh@974 -- # wait 75142 00:30:40.431 18:58:09 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:30:40.431 18:58:09 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:30:40.431 18:58:09 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:30:40.431 18:58:09 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:40.431 18:58:09 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:40.431 ************************************ 00:30:40.431 START TEST ftl_fio_basic 00:30:40.431 ************************************ 00:30:40.431 18:58:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:30:40.431 * Looking for test storage... 00:30:40.431 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:30:40.431 18:58:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:40.431 18:58:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lcov --version 00:30:40.431 18:58:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:40.691 18:58:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:40.691 18:58:09 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:40.691 18:58:09 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:40.691 18:58:09 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:40.691 18:58:09 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:30:40.691 18:58:09 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:30:40.691 18:58:09 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:30:40.691 18:58:09 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:30:40.691 18:58:09 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:30:40.691 18:58:09 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:30:40.691 18:58:09 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:30:40.691 18:58:09 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:40.691 18:58:09 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:30:40.691 18:58:09 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:30:40.691 18:58:09 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:40.691 18:58:09 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:40.691 18:58:09 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:30:40.691 18:58:09 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:30:40.691 18:58:09 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:40.691 18:58:09 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:30:40.691 18:58:09 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:30:40.691 18:58:09 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:30:40.691 18:58:09 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:30:40.691 18:58:09 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:40.691 18:58:09 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:30:40.691 18:58:09 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:30:40.691 18:58:09 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:40.691 18:58:09 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:40.691 18:58:09 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:40.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.692 --rc genhtml_branch_coverage=1 00:30:40.692 --rc genhtml_function_coverage=1 00:30:40.692 --rc genhtml_legend=1 00:30:40.692 --rc geninfo_all_blocks=1 00:30:40.692 --rc geninfo_unexecuted_blocks=1 00:30:40.692 00:30:40.692 ' 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:40.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.692 --rc genhtml_branch_coverage=1 00:30:40.692 --rc genhtml_function_coverage=1 00:30:40.692 --rc genhtml_legend=1 00:30:40.692 --rc geninfo_all_blocks=1 00:30:40.692 --rc geninfo_unexecuted_blocks=1 00:30:40.692 00:30:40.692 ' 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:40.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.692 --rc genhtml_branch_coverage=1 00:30:40.692 --rc genhtml_function_coverage=1 00:30:40.692 --rc genhtml_legend=1 00:30:40.692 --rc geninfo_all_blocks=1 00:30:40.692 --rc geninfo_unexecuted_blocks=1 00:30:40.692 00:30:40.692 ' 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:40.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.692 --rc genhtml_branch_coverage=1 00:30:40.692 --rc genhtml_function_coverage=1 00:30:40.692 --rc genhtml_legend=1 00:30:40.692 --rc geninfo_all_blocks=1 00:30:40.692 --rc geninfo_unexecuted_blocks=1 00:30:40.692 00:30:40.692 ' 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
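(This is the same lcov gate again, now running under the ftl.ftl_fio_basic prefix.) The ftl/common.sh block traced next resolves testdir and rootdir and then exports the FTL test environment. Condensed, and using the $rootdir/$testdir values the trace assigns, the setup amounts to:

    # Condensed from the ftl/common.sh exports traced below (paths abbreviated).
    export ftl_tgt_core_mask='[0]'                     # FTL target pinned to core 0
    export spdk_tgt_bin=$rootdir/build/bin/spdk_tgt spdk_tgt_cpumask='[0]'
    export spdk_tgt_cnfg=$testdir/config/tgt.json
    export spdk_ini_bin=$rootdir/build/bin/spdk_tgt spdk_ini_cpumask='[1]'  # initiator on core 1
    export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
    export spdk_ini_cnfg=$testdir/config/ini.json
    export spdk_dd_bin=$rootdir/build/bin/spdk_dd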
00:30:40.692 18:58:09 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=75292 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 75292 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # '[' -z 75292 ']' 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:40.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:40.692 18:58:09 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:30:40.692 [2024-10-08 18:58:09.401397] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
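At this point spdk_tgt has just been launched with -m 7 (cores 0-2) as pid 75292, and waitforlisten blocks until the target answers on its RPC socket. A minimal sketch of that polling idiom, assuming rpc.py's generic rpc_get_methods call as the liveness probe (the real helper in autotest_common.sh checks a few more states):

    # Sketch of waitforlisten: poll the RPC socket until the target responds.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1    # target died during startup
            "$rootdir/scripts/rpc.py" -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null && return 0
            sleep 0.5
        done
        return 1                                       # never came up
    }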
00:30:40.692 [2024-10-08 18:58:09.401739] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75292 ] 00:30:40.951 [2024-10-08 18:58:09.571510] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:41.210 [2024-10-08 18:58:09.778118] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:30:41.210 [2024-10-08 18:58:09.778260] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:30:41.210 [2024-10-08 18:58:09.778292] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:30:42.146 18:58:10 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:42.146 18:58:10 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # return 0 00:30:42.146 18:58:10 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:30:42.146 18:58:10 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:30:42.146 18:58:10 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:30:42.146 18:58:10 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:30:42.146 18:58:10 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:30:42.146 18:58:10 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:30:42.405 18:58:10 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:30:42.405 18:58:10 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:30:42.405 18:58:10 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:30:42.405 18:58:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:30:42.405 18:58:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:30:42.405 18:58:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:30:42.405 18:58:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:30:42.405 18:58:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:30:42.665 18:58:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:30:42.665 { 00:30:42.665 "name": "nvme0n1", 00:30:42.665 "aliases": [ 00:30:42.665 "cf4c83ad-bf30-4f6b-8049-58da14fa6a38" 00:30:42.665 ], 00:30:42.665 "product_name": "NVMe disk", 00:30:42.665 "block_size": 4096, 00:30:42.665 "num_blocks": 1310720, 00:30:42.665 "uuid": "cf4c83ad-bf30-4f6b-8049-58da14fa6a38", 00:30:42.665 "numa_id": -1, 00:30:42.665 "assigned_rate_limits": { 00:30:42.665 "rw_ios_per_sec": 0, 00:30:42.665 "rw_mbytes_per_sec": 0, 00:30:42.665 "r_mbytes_per_sec": 0, 00:30:42.665 "w_mbytes_per_sec": 0 00:30:42.665 }, 00:30:42.665 "claimed": false, 00:30:42.665 "zoned": false, 00:30:42.665 "supported_io_types": { 00:30:42.665 "read": true, 00:30:42.665 "write": true, 00:30:42.665 "unmap": true, 00:30:42.665 "flush": true, 00:30:42.665 "reset": true, 00:30:42.665 "nvme_admin": true, 00:30:42.665 "nvme_io": true, 00:30:42.665 "nvme_io_md": false, 00:30:42.665 "write_zeroes": true, 00:30:42.665 "zcopy": false, 00:30:42.665 "get_zone_info": false, 00:30:42.665 "zone_management": false, 00:30:42.665 "zone_append": false, 00:30:42.665 "compare": true, 00:30:42.665 "compare_and_write": false, 00:30:42.665 "abort": true, 00:30:42.665 
"seek_hole": false, 00:30:42.665 "seek_data": false, 00:30:42.665 "copy": true, 00:30:42.665 "nvme_iov_md": false 00:30:42.665 }, 00:30:42.665 "driver_specific": { 00:30:42.665 "nvme": [ 00:30:42.665 { 00:30:42.665 "pci_address": "0000:00:11.0", 00:30:42.665 "trid": { 00:30:42.665 "trtype": "PCIe", 00:30:42.665 "traddr": "0000:00:11.0" 00:30:42.665 }, 00:30:42.665 "ctrlr_data": { 00:30:42.665 "cntlid": 0, 00:30:42.665 "vendor_id": "0x1b36", 00:30:42.665 "model_number": "QEMU NVMe Ctrl", 00:30:42.665 "serial_number": "12341", 00:30:42.665 "firmware_revision": "8.0.0", 00:30:42.665 "subnqn": "nqn.2019-08.org.qemu:12341", 00:30:42.665 "oacs": { 00:30:42.665 "security": 0, 00:30:42.665 "format": 1, 00:30:42.665 "firmware": 0, 00:30:42.665 "ns_manage": 1 00:30:42.665 }, 00:30:42.665 "multi_ctrlr": false, 00:30:42.665 "ana_reporting": false 00:30:42.665 }, 00:30:42.665 "vs": { 00:30:42.665 "nvme_version": "1.4" 00:30:42.665 }, 00:30:42.665 "ns_data": { 00:30:42.665 "id": 1, 00:30:42.665 "can_share": false 00:30:42.665 } 00:30:42.665 } 00:30:42.665 ], 00:30:42.665 "mp_policy": "active_passive" 00:30:42.665 } 00:30:42.665 } 00:30:42.665 ]' 00:30:42.665 18:58:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:30:42.665 18:58:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:30:42.665 18:58:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:30:42.665 18:58:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:30:42.665 18:58:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:30:42.665 18:58:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:30:42.665 18:58:11 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:30:42.665 18:58:11 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:30:42.665 18:58:11 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:30:42.665 18:58:11 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:42.665 18:58:11 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:30:42.923 18:58:11 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:30:42.923 18:58:11 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:30:43.182 18:58:11 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=8a22bd17-dcdc-4890-a439-ddf2025ebc5d 00:30:43.182 18:58:11 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 8a22bd17-dcdc-4890-a439-ddf2025ebc5d 00:30:43.441 18:58:12 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=3e20184d-3dd9-4eb7-89f9-0e2cfa21eb21 00:30:43.441 18:58:12 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 3e20184d-3dd9-4eb7-89f9-0e2cfa21eb21 00:30:43.441 18:58:12 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:30:43.441 18:58:12 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:30:43.441 18:58:12 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=3e20184d-3dd9-4eb7-89f9-0e2cfa21eb21 00:30:43.441 18:58:12 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:30:43.441 18:58:12 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 3e20184d-3dd9-4eb7-89f9-0e2cfa21eb21 00:30:43.441 18:58:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=3e20184d-3dd9-4eb7-89f9-0e2cfa21eb21 
00:30:43.441 18:58:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:30:43.441 18:58:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:30:43.441 18:58:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:30:43.441 18:58:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3e20184d-3dd9-4eb7-89f9-0e2cfa21eb21 00:30:43.700 18:58:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:30:43.700 { 00:30:43.700 "name": "3e20184d-3dd9-4eb7-89f9-0e2cfa21eb21", 00:30:43.700 "aliases": [ 00:30:43.700 "lvs/nvme0n1p0" 00:30:43.700 ], 00:30:43.700 "product_name": "Logical Volume", 00:30:43.700 "block_size": 4096, 00:30:43.700 "num_blocks": 26476544, 00:30:43.700 "uuid": "3e20184d-3dd9-4eb7-89f9-0e2cfa21eb21", 00:30:43.700 "assigned_rate_limits": { 00:30:43.700 "rw_ios_per_sec": 0, 00:30:43.700 "rw_mbytes_per_sec": 0, 00:30:43.700 "r_mbytes_per_sec": 0, 00:30:43.700 "w_mbytes_per_sec": 0 00:30:43.700 }, 00:30:43.700 "claimed": false, 00:30:43.700 "zoned": false, 00:30:43.700 "supported_io_types": { 00:30:43.700 "read": true, 00:30:43.700 "write": true, 00:30:43.700 "unmap": true, 00:30:43.700 "flush": false, 00:30:43.700 "reset": true, 00:30:43.700 "nvme_admin": false, 00:30:43.700 "nvme_io": false, 00:30:43.700 "nvme_io_md": false, 00:30:43.700 "write_zeroes": true, 00:30:43.700 "zcopy": false, 00:30:43.700 "get_zone_info": false, 00:30:43.700 "zone_management": false, 00:30:43.700 "zone_append": false, 00:30:43.700 "compare": false, 00:30:43.700 "compare_and_write": false, 00:30:43.700 "abort": false, 00:30:43.700 "seek_hole": true, 00:30:43.700 "seek_data": true, 00:30:43.700 "copy": false, 00:30:43.700 "nvme_iov_md": false 00:30:43.700 }, 00:30:43.700 "driver_specific": { 00:30:43.700 "lvol": { 00:30:43.700 "lvol_store_uuid": "8a22bd17-dcdc-4890-a439-ddf2025ebc5d", 00:30:43.700 "base_bdev": "nvme0n1", 00:30:43.700 "thin_provision": true, 00:30:43.700 "num_allocated_clusters": 0, 00:30:43.700 "snapshot": false, 00:30:43.700 "clone": false, 00:30:43.700 "esnap_clone": false 00:30:43.700 } 00:30:43.700 } 00:30:43.700 } 00:30:43.700 ]' 00:30:43.700 18:58:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:30:43.700 18:58:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:30:43.700 18:58:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:30:43.700 18:58:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:30:43.700 18:58:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:30:43.700 18:58:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:30:43.700 18:58:12 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:30:43.700 18:58:12 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:30:43.700 18:58:12 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:30:43.959 18:58:12 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:30:43.959 18:58:12 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:30:43.959 18:58:12 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 3e20184d-3dd9-4eb7-89f9-0e2cfa21eb21 00:30:43.959 18:58:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=3e20184d-3dd9-4eb7-89f9-0e2cfa21eb21 00:30:43.959 18:58:12 
ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:30:43.959 18:58:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:30:43.959 18:58:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:30:43.959 18:58:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3e20184d-3dd9-4eb7-89f9-0e2cfa21eb21 00:30:44.218 18:58:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:30:44.218 { 00:30:44.218 "name": "3e20184d-3dd9-4eb7-89f9-0e2cfa21eb21", 00:30:44.218 "aliases": [ 00:30:44.218 "lvs/nvme0n1p0" 00:30:44.218 ], 00:30:44.218 "product_name": "Logical Volume", 00:30:44.218 "block_size": 4096, 00:30:44.218 "num_blocks": 26476544, 00:30:44.218 "uuid": "3e20184d-3dd9-4eb7-89f9-0e2cfa21eb21", 00:30:44.218 "assigned_rate_limits": { 00:30:44.218 "rw_ios_per_sec": 0, 00:30:44.218 "rw_mbytes_per_sec": 0, 00:30:44.218 "r_mbytes_per_sec": 0, 00:30:44.218 "w_mbytes_per_sec": 0 00:30:44.218 }, 00:30:44.218 "claimed": false, 00:30:44.218 "zoned": false, 00:30:44.218 "supported_io_types": { 00:30:44.218 "read": true, 00:30:44.218 "write": true, 00:30:44.218 "unmap": true, 00:30:44.218 "flush": false, 00:30:44.218 "reset": true, 00:30:44.218 "nvme_admin": false, 00:30:44.218 "nvme_io": false, 00:30:44.218 "nvme_io_md": false, 00:30:44.218 "write_zeroes": true, 00:30:44.218 "zcopy": false, 00:30:44.218 "get_zone_info": false, 00:30:44.218 "zone_management": false, 00:30:44.218 "zone_append": false, 00:30:44.218 "compare": false, 00:30:44.218 "compare_and_write": false, 00:30:44.218 "abort": false, 00:30:44.218 "seek_hole": true, 00:30:44.218 "seek_data": true, 00:30:44.218 "copy": false, 00:30:44.218 "nvme_iov_md": false 00:30:44.218 }, 00:30:44.218 "driver_specific": { 00:30:44.218 "lvol": { 00:30:44.218 "lvol_store_uuid": "8a22bd17-dcdc-4890-a439-ddf2025ebc5d", 00:30:44.218 "base_bdev": "nvme0n1", 00:30:44.218 "thin_provision": true, 00:30:44.218 "num_allocated_clusters": 0, 00:30:44.218 "snapshot": false, 00:30:44.218 "clone": false, 00:30:44.218 "esnap_clone": false 00:30:44.218 } 00:30:44.218 } 00:30:44.218 } 00:30:44.218 ]' 00:30:44.218 18:58:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:30:44.218 18:58:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:30:44.218 18:58:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:30:44.477 18:58:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:30:44.477 18:58:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:30:44.477 18:58:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:30:44.477 18:58:13 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:30:44.477 18:58:13 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:30:44.747 18:58:13 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:30:44.747 18:58:13 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:30:44.747 18:58:13 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:30:44.747 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:30:44.747 18:58:13 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 3e20184d-3dd9-4eb7-89f9-0e2cfa21eb21 00:30:44.747 18:58:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local 
bdev_name=3e20184d-3dd9-4eb7-89f9-0e2cfa21eb21 00:30:44.747 18:58:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:30:44.747 18:58:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:30:44.747 18:58:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:30:44.747 18:58:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3e20184d-3dd9-4eb7-89f9-0e2cfa21eb21 00:30:45.013 18:58:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:30:45.013 { 00:30:45.013 "name": "3e20184d-3dd9-4eb7-89f9-0e2cfa21eb21", 00:30:45.013 "aliases": [ 00:30:45.013 "lvs/nvme0n1p0" 00:30:45.013 ], 00:30:45.013 "product_name": "Logical Volume", 00:30:45.013 "block_size": 4096, 00:30:45.013 "num_blocks": 26476544, 00:30:45.013 "uuid": "3e20184d-3dd9-4eb7-89f9-0e2cfa21eb21", 00:30:45.013 "assigned_rate_limits": { 00:30:45.013 "rw_ios_per_sec": 0, 00:30:45.013 "rw_mbytes_per_sec": 0, 00:30:45.013 "r_mbytes_per_sec": 0, 00:30:45.013 "w_mbytes_per_sec": 0 00:30:45.013 }, 00:30:45.013 "claimed": false, 00:30:45.013 "zoned": false, 00:30:45.013 "supported_io_types": { 00:30:45.013 "read": true, 00:30:45.013 "write": true, 00:30:45.013 "unmap": true, 00:30:45.013 "flush": false, 00:30:45.013 "reset": true, 00:30:45.013 "nvme_admin": false, 00:30:45.013 "nvme_io": false, 00:30:45.013 "nvme_io_md": false, 00:30:45.013 "write_zeroes": true, 00:30:45.013 "zcopy": false, 00:30:45.013 "get_zone_info": false, 00:30:45.013 "zone_management": false, 00:30:45.013 "zone_append": false, 00:30:45.013 "compare": false, 00:30:45.013 "compare_and_write": false, 00:30:45.013 "abort": false, 00:30:45.013 "seek_hole": true, 00:30:45.013 "seek_data": true, 00:30:45.013 "copy": false, 00:30:45.013 "nvme_iov_md": false 00:30:45.013 }, 00:30:45.013 "driver_specific": { 00:30:45.013 "lvol": { 00:30:45.013 "lvol_store_uuid": "8a22bd17-dcdc-4890-a439-ddf2025ebc5d", 00:30:45.013 "base_bdev": "nvme0n1", 00:30:45.013 "thin_provision": true, 00:30:45.013 "num_allocated_clusters": 0, 00:30:45.013 "snapshot": false, 00:30:45.013 "clone": false, 00:30:45.013 "esnap_clone": false 00:30:45.013 } 00:30:45.013 } 00:30:45.013 } 00:30:45.013 ]' 00:30:45.013 18:58:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:30:45.013 18:58:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:30:45.013 18:58:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:30:45.013 18:58:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:30:45.013 18:58:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:30:45.013 18:58:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:30:45.013 18:58:13 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:30:45.013 18:58:13 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:30:45.013 18:58:13 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 3e20184d-3dd9-4eb7-89f9-0e2cfa21eb21 -c nvc0n1p0 --l2p_dram_limit 60 00:30:45.274 [2024-10-08 18:58:13.837201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.274 [2024-10-08 18:58:13.837266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:45.274 [2024-10-08 18:58:13.837295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:30:45.274 
[2024-10-08 18:58:13.837314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.274 [2024-10-08 18:58:13.837418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.274 [2024-10-08 18:58:13.837439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:45.274 [2024-10-08 18:58:13.837459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:30:45.274 [2024-10-08 18:58:13.837473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.274 [2024-10-08 18:58:13.837528] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:45.274 [2024-10-08 18:58:13.838641] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:45.274 [2024-10-08 18:58:13.838690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.274 [2024-10-08 18:58:13.838706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:45.274 [2024-10-08 18:58:13.838724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.184 ms 00:30:45.274 [2024-10-08 18:58:13.838738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.274 [2024-10-08 18:58:13.838869] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 794253d5-b8ff-40b0-aa55-75ccc962ea2c 00:30:45.274 [2024-10-08 18:58:13.840472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.274 [2024-10-08 18:58:13.840529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:30:45.274 [2024-10-08 18:58:13.840563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:30:45.274 [2024-10-08 18:58:13.840580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.274 [2024-10-08 18:58:13.848373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.274 [2024-10-08 18:58:13.848416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:45.274 [2024-10-08 18:58:13.848433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.706 ms 00:30:45.274 [2024-10-08 18:58:13.848449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.274 [2024-10-08 18:58:13.848580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.274 [2024-10-08 18:58:13.848602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:45.274 [2024-10-08 18:58:13.848617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:30:45.274 [2024-10-08 18:58:13.848637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.274 [2024-10-08 18:58:13.848752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.274 [2024-10-08 18:58:13.848775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:45.274 [2024-10-08 18:58:13.848790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:30:45.274 [2024-10-08 18:58:13.848808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.274 [2024-10-08 18:58:13.848849] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:45.274 [2024-10-08 18:58:13.854588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.274 [2024-10-08 
18:58:13.854624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:45.274 [2024-10-08 18:58:13.854644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.748 ms 00:30:45.274 [2024-10-08 18:58:13.854657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.274 [2024-10-08 18:58:13.854708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.274 [2024-10-08 18:58:13.854722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:45.274 [2024-10-08 18:58:13.854739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:30:45.274 [2024-10-08 18:58:13.854751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.274 [2024-10-08 18:58:13.854819] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:30:45.274 [2024-10-08 18:58:13.855024] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:45.274 [2024-10-08 18:58:13.855059] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:45.274 [2024-10-08 18:58:13.855077] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:45.274 [2024-10-08 18:58:13.855102] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:45.274 [2024-10-08 18:58:13.855122] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:45.274 [2024-10-08 18:58:13.855141] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:45.274 [2024-10-08 18:58:13.855154] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:45.274 [2024-10-08 18:58:13.855171] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:45.274 [2024-10-08 18:58:13.855184] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:45.274 [2024-10-08 18:58:13.855203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.274 [2024-10-08 18:58:13.855216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:45.274 [2024-10-08 18:58:13.855234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.385 ms 00:30:45.274 [2024-10-08 18:58:13.855246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.274 [2024-10-08 18:58:13.855361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.274 [2024-10-08 18:58:13.855386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:45.274 [2024-10-08 18:58:13.855414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:30:45.274 [2024-10-08 18:58:13.855428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.274 [2024-10-08 18:58:13.855552] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:45.274 [2024-10-08 18:58:13.855568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:45.274 [2024-10-08 18:58:13.855586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:45.274 [2024-10-08 18:58:13.855599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:45.274 [2024-10-08 18:58:13.855616] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:30:45.274 [2024-10-08 18:58:13.855629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:45.274 [2024-10-08 18:58:13.855646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:45.274 [2024-10-08 18:58:13.855659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:45.274 [2024-10-08 18:58:13.855676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:45.274 [2024-10-08 18:58:13.855689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:45.274 [2024-10-08 18:58:13.855705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:45.274 [2024-10-08 18:58:13.855717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:45.274 [2024-10-08 18:58:13.855733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:45.274 [2024-10-08 18:58:13.855746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:45.274 [2024-10-08 18:58:13.855762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:45.274 [2024-10-08 18:58:13.855775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:45.274 [2024-10-08 18:58:13.855793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:45.274 [2024-10-08 18:58:13.855806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:45.274 [2024-10-08 18:58:13.855821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:45.274 [2024-10-08 18:58:13.855834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:45.274 [2024-10-08 18:58:13.855852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:45.274 [2024-10-08 18:58:13.855865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:45.274 [2024-10-08 18:58:13.855881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:45.274 [2024-10-08 18:58:13.855894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:45.274 [2024-10-08 18:58:13.855911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:45.274 [2024-10-08 18:58:13.855923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:45.274 [2024-10-08 18:58:13.855939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:45.274 [2024-10-08 18:58:13.855952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:45.274 [2024-10-08 18:58:13.855979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:45.275 [2024-10-08 18:58:13.855992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:45.275 [2024-10-08 18:58:13.856008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:45.275 [2024-10-08 18:58:13.856021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:45.275 [2024-10-08 18:58:13.856040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:45.275 [2024-10-08 18:58:13.856052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:45.275 [2024-10-08 18:58:13.856069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:45.275 [2024-10-08 18:58:13.856081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:45.275 [2024-10-08 18:58:13.856097] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:45.275 [2024-10-08 18:58:13.856110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:45.275 [2024-10-08 18:58:13.856126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:45.275 [2024-10-08 18:58:13.856155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:45.275 [2024-10-08 18:58:13.856172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:45.275 [2024-10-08 18:58:13.856184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:45.275 [2024-10-08 18:58:13.856200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:45.275 [2024-10-08 18:58:13.856212] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:45.275 [2024-10-08 18:58:13.856229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:45.275 [2024-10-08 18:58:13.856249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:45.275 [2024-10-08 18:58:13.856269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:45.275 [2024-10-08 18:58:13.856283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:45.275 [2024-10-08 18:58:13.856306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:45.275 [2024-10-08 18:58:13.856320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:45.275 [2024-10-08 18:58:13.856339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:45.275 [2024-10-08 18:58:13.856352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:45.275 [2024-10-08 18:58:13.856372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:45.275 [2024-10-08 18:58:13.856409] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:45.275 [2024-10-08 18:58:13.856444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:45.275 [2024-10-08 18:58:13.856460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:45.275 [2024-10-08 18:58:13.856484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:45.275 [2024-10-08 18:58:13.856502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:45.275 [2024-10-08 18:58:13.856522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:45.275 [2024-10-08 18:58:13.856537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:45.275 [2024-10-08 18:58:13.856557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:45.275 [2024-10-08 18:58:13.856571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:45.275 [2024-10-08 18:58:13.856589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:30:45.275 [2024-10-08 18:58:13.856603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:45.275 [2024-10-08 18:58:13.856623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:45.275 [2024-10-08 18:58:13.856637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:45.275 [2024-10-08 18:58:13.856657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:45.275 [2024-10-08 18:58:13.856671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:45.275 [2024-10-08 18:58:13.856688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:45.275 [2024-10-08 18:58:13.856701] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:45.275 [2024-10-08 18:58:13.856719] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:45.275 [2024-10-08 18:58:13.856734] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:45.275 [2024-10-08 18:58:13.856751] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:45.275 [2024-10-08 18:58:13.856765] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:45.275 [2024-10-08 18:58:13.856782] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:45.275 [2024-10-08 18:58:13.856797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.275 [2024-10-08 18:58:13.856814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:45.275 [2024-10-08 18:58:13.856829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.312 ms 00:30:45.275 [2024-10-08 18:58:13.856845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.275 [2024-10-08 18:58:13.856917] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
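The superblock region table above is expressed in 4 KiB blocks, so it can be cross-checked against the MiB layout dump that precedes it: the L2P region (type 0x2, blk_offs:0x20, blk_sz:0x5000) is 20480 blocks x 4 KiB = 80 MiB, matching "Region l2p ... blocks: 80.00 MiB", and each P2L checkpoint region (types 0xa-0xd, blk_sz:0x800) is 2048 blocks = 8 MiB, matching p2l0-p2l3 and the "P2L checkpoint pages: 2048" line. A one-liner to verify:

    printf '%d MiB\n' $((0x5000 * 4096 / 1024 / 1024))   # -> 80, the l2p region size

The scrub notice just above covers the NV cache's 5 chunks ("NV cache chunk count 5" in the layout summary); the entries that follow time that pass.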
00:30:45.275 [2024-10-08 18:58:13.856944] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:30:48.560 [2024-10-08 18:58:16.934760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.560 [2024-10-08 18:58:16.934866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:30:48.560 [2024-10-08 18:58:16.934888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3077.819 ms 00:30:48.560 [2024-10-08 18:58:16.934907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.560 [2024-10-08 18:58:16.986271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.560 [2024-10-08 18:58:16.986345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:48.560 [2024-10-08 18:58:16.986369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.942 ms 00:30:48.560 [2024-10-08 18:58:16.986390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.560 [2024-10-08 18:58:16.986600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.560 [2024-10-08 18:58:16.986636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:48.560 [2024-10-08 18:58:16.986662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:30:48.560 [2024-10-08 18:58:16.986686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.560 [2024-10-08 18:58:17.037006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.560 [2024-10-08 18:58:17.037073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:48.560 [2024-10-08 18:58:17.037091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.200 ms 00:30:48.560 [2024-10-08 18:58:17.037108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.560 [2024-10-08 18:58:17.037164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.560 [2024-10-08 18:58:17.037183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:48.560 [2024-10-08 18:58:17.037197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:48.560 [2024-10-08 18:58:17.037217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.560 [2024-10-08 18:58:17.037737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.560 [2024-10-08 18:58:17.037768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:48.560 [2024-10-08 18:58:17.037782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.436 ms 00:30:48.560 [2024-10-08 18:58:17.037798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.560 [2024-10-08 18:58:17.037931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.560 [2024-10-08 18:58:17.037973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:48.560 [2024-10-08 18:58:17.038005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:30:48.560 [2024-10-08 18:58:17.038025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.560 [2024-10-08 18:58:17.059709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.560 [2024-10-08 18:58:17.059766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:48.560 [2024-10-08 
18:58:17.059784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.644 ms 00:30:48.560 [2024-10-08 18:58:17.059805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.560 [2024-10-08 18:58:17.073600] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:30:48.560 [2024-10-08 18:58:17.090721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.560 [2024-10-08 18:58:17.090793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:48.560 [2024-10-08 18:58:17.090816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.772 ms 00:30:48.560 [2024-10-08 18:58:17.090830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.560 [2024-10-08 18:58:17.164558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.560 [2024-10-08 18:58:17.164634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:30:48.560 [2024-10-08 18:58:17.164660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.653 ms 00:30:48.560 [2024-10-08 18:58:17.164675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.560 [2024-10-08 18:58:17.164934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.560 [2024-10-08 18:58:17.164984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:48.560 [2024-10-08 18:58:17.165007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.189 ms 00:30:48.560 [2024-10-08 18:58:17.165030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.560 [2024-10-08 18:58:17.205302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.560 [2024-10-08 18:58:17.205366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:30:48.560 [2024-10-08 18:58:17.205392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.154 ms 00:30:48.560 [2024-10-08 18:58:17.205407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.561 [2024-10-08 18:58:17.242976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.561 [2024-10-08 18:58:17.243034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:30:48.561 [2024-10-08 18:58:17.243058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.492 ms 00:30:48.561 [2024-10-08 18:58:17.243071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.561 [2024-10-08 18:58:17.243940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.561 [2024-10-08 18:58:17.243986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:48.561 [2024-10-08 18:58:17.244007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.807 ms 00:30:48.561 [2024-10-08 18:58:17.244020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.819 [2024-10-08 18:58:17.350913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.819 [2024-10-08 18:58:17.351004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:30:48.819 [2024-10-08 18:58:17.351035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 106.794 ms 00:30:48.819 [2024-10-08 18:58:17.351049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.819 [2024-10-08 
18:58:17.390829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.819 [2024-10-08 18:58:17.390890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:30:48.819 [2024-10-08 18:58:17.390919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.652 ms 00:30:48.819 [2024-10-08 18:58:17.390932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.819 [2024-10-08 18:58:17.430741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.819 [2024-10-08 18:58:17.430798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:30:48.819 [2024-10-08 18:58:17.430821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.733 ms 00:30:48.819 [2024-10-08 18:58:17.430834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.819 [2024-10-08 18:58:17.469977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.819 [2024-10-08 18:58:17.470027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:48.819 [2024-10-08 18:58:17.470049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.080 ms 00:30:48.819 [2024-10-08 18:58:17.470062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.819 [2024-10-08 18:58:17.470127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.819 [2024-10-08 18:58:17.470142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:48.819 [2024-10-08 18:58:17.470163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:48.819 [2024-10-08 18:58:17.470175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.819 [2024-10-08 18:58:17.470386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:48.819 [2024-10-08 18:58:17.470417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:48.819 [2024-10-08 18:58:17.470437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:30:48.819 [2024-10-08 18:58:17.470451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.819 [2024-10-08 18:58:17.471787] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3633.976 ms, result 0 00:30:48.819 { 00:30:48.819 "name": "ftl0", 00:30:48.819 "uuid": "794253d5-b8ff-40b0-aa55-75ccc962ea2c" 00:30:48.819 } 00:30:48.819 18:58:17 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:30:48.819 18:58:17 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:30:48.819 18:58:17 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:48.819 18:58:17 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local i 00:30:48.819 18:58:17 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:48.819 18:58:17 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:48.819 18:58:17 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:49.077 18:58:17 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:30:49.336 [ 00:30:49.336 { 00:30:49.336 "name": "ftl0", 00:30:49.336 "aliases": [ 00:30:49.336 "794253d5-b8ff-40b0-aa55-75ccc962ea2c" 00:30:49.336 ], 00:30:49.336 "product_name": "FTL 
disk", 00:30:49.336 "block_size": 4096, 00:30:49.336 "num_blocks": 20971520, 00:30:49.336 "uuid": "794253d5-b8ff-40b0-aa55-75ccc962ea2c", 00:30:49.336 "assigned_rate_limits": { 00:30:49.336 "rw_ios_per_sec": 0, 00:30:49.336 "rw_mbytes_per_sec": 0, 00:30:49.336 "r_mbytes_per_sec": 0, 00:30:49.336 "w_mbytes_per_sec": 0 00:30:49.336 }, 00:30:49.336 "claimed": false, 00:30:49.336 "zoned": false, 00:30:49.336 "supported_io_types": { 00:30:49.336 "read": true, 00:30:49.336 "write": true, 00:30:49.336 "unmap": true, 00:30:49.336 "flush": true, 00:30:49.336 "reset": false, 00:30:49.336 "nvme_admin": false, 00:30:49.336 "nvme_io": false, 00:30:49.336 "nvme_io_md": false, 00:30:49.336 "write_zeroes": true, 00:30:49.336 "zcopy": false, 00:30:49.336 "get_zone_info": false, 00:30:49.336 "zone_management": false, 00:30:49.336 "zone_append": false, 00:30:49.336 "compare": false, 00:30:49.336 "compare_and_write": false, 00:30:49.336 "abort": false, 00:30:49.336 "seek_hole": false, 00:30:49.336 "seek_data": false, 00:30:49.336 "copy": false, 00:30:49.336 "nvme_iov_md": false 00:30:49.336 }, 00:30:49.336 "driver_specific": { 00:30:49.336 "ftl": { 00:30:49.336 "base_bdev": "3e20184d-3dd9-4eb7-89f9-0e2cfa21eb21", 00:30:49.336 "cache": "nvc0n1p0" 00:30:49.336 } 00:30:49.336 } 00:30:49.336 } 00:30:49.336 ] 00:30:49.336 18:58:17 ftl.ftl_fio_basic -- common/autotest_common.sh@907 -- # return 0 00:30:49.336 18:58:17 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:30:49.336 18:58:17 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:30:49.593 18:58:18 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:30:49.593 18:58:18 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:30:49.851 [2024-10-08 18:58:18.504704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.851 [2024-10-08 18:58:18.504779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:49.851 [2024-10-08 18:58:18.504799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:49.851 [2024-10-08 18:58:18.504817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.851 [2024-10-08 18:58:18.504859] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:49.851 [2024-10-08 18:58:18.509169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.851 [2024-10-08 18:58:18.509210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:49.851 [2024-10-08 18:58:18.509231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.275 ms 00:30:49.851 [2024-10-08 18:58:18.509244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.851 [2024-10-08 18:58:18.509715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.851 [2024-10-08 18:58:18.509743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:49.851 [2024-10-08 18:58:18.509761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.418 ms 00:30:49.851 [2024-10-08 18:58:18.509779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.851 [2024-10-08 18:58:18.512518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.851 [2024-10-08 18:58:18.512543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:49.851 
[2024-10-08 18:58:18.512563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.707 ms 00:30:49.851 [2024-10-08 18:58:18.512576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.851 [2024-10-08 18:58:18.517979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.851 [2024-10-08 18:58:18.518021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:49.851 [2024-10-08 18:58:18.518039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.365 ms 00:30:49.851 [2024-10-08 18:58:18.518051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.852 [2024-10-08 18:58:18.557415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.852 [2024-10-08 18:58:18.557467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:49.852 [2024-10-08 18:58:18.557489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.237 ms 00:30:49.852 [2024-10-08 18:58:18.557502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.852 [2024-10-08 18:58:18.581887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.852 [2024-10-08 18:58:18.581941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:49.852 [2024-10-08 18:58:18.581971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.293 ms 00:30:49.852 [2024-10-08 18:58:18.581986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.852 [2024-10-08 18:58:18.582244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.852 [2024-10-08 18:58:18.582261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:49.852 [2024-10-08 18:58:18.582278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:30:49.852 [2024-10-08 18:58:18.582291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.111 [2024-10-08 18:58:18.621363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.111 [2024-10-08 18:58:18.621427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:50.111 [2024-10-08 18:58:18.621450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.024 ms 00:30:50.111 [2024-10-08 18:58:18.621463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.111 [2024-10-08 18:58:18.660407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.111 [2024-10-08 18:58:18.660466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:50.111 [2024-10-08 18:58:18.660489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.861 ms 00:30:50.111 [2024-10-08 18:58:18.660502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.111 [2024-10-08 18:58:18.698611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.111 [2024-10-08 18:58:18.698661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:50.111 [2024-10-08 18:58:18.698683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.037 ms 00:30:50.111 [2024-10-08 18:58:18.698697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.111 [2024-10-08 18:58:18.738970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.111 [2024-10-08 18:58:18.739032] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:50.111 [2024-10-08 18:58:18.739055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.101 ms 00:30:50.111 [2024-10-08 18:58:18.739068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.111 [2024-10-08 18:58:18.739163] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:50.112 [2024-10-08 18:58:18.739184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.739212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.739227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.739245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.739260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.739278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.739293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.739315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.739330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.739347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.739361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.739387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.739402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.739419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.739434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.739452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.739466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.739485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.739499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.739519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.739534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.739552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 
[2024-10-08 18:58:18.739566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.739587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.739601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.739619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.739635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.739653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.739668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.739686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.739700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.739722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.739736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.739755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.739770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.739787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.739802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.739820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.739834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.739855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.739869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.739887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.739901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.739919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.739932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.739952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.739986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:30:50.112 [2024-10-08 18:58:18.740004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.740018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.740036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.740051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.740068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.740083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.740100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.740115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.740137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.740151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.740169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.740183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.740201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.740215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.740233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.740248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.740281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.740296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.740327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.740341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.740358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.740371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.740388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.740400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.740421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.740435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.740452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.740465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.740481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.740494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.740510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.740524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.740540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.740553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.740569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.740583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.740600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.740613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.740630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.740644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:50.112 [2024-10-08 18:58:18.740685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:50.113 [2024-10-08 18:58:18.740698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:50.113 [2024-10-08 18:58:18.740715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:50.113 [2024-10-08 18:58:18.740728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:50.113 [2024-10-08 18:58:18.740745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:50.113 [2024-10-08 18:58:18.740758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:50.113 [2024-10-08 18:58:18.740775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:50.113 [2024-10-08 18:58:18.740787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:50.113 [2024-10-08 18:58:18.740805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:50.113 [2024-10-08 18:58:18.740818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:50.113 [2024-10-08 18:58:18.740836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:50.113 [2024-10-08 18:58:18.740849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:50.113 [2024-10-08 18:58:18.740866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:50.113 [2024-10-08 18:58:18.740886] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:50.113 [2024-10-08 18:58:18.740902] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 794253d5-b8ff-40b0-aa55-75ccc962ea2c 00:30:50.113 [2024-10-08 18:58:18.740916] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:50.113 [2024-10-08 18:58:18.740935] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:50.113 [2024-10-08 18:58:18.740947] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:50.113 [2024-10-08 18:58:18.740972] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:50.113 [2024-10-08 18:58:18.740985] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:50.113 [2024-10-08 18:58:18.741001] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:50.113 [2024-10-08 18:58:18.741014] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:50.113 [2024-10-08 18:58:18.741028] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:50.113 [2024-10-08 18:58:18.741040] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:50.113 [2024-10-08 18:58:18.741055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.113 [2024-10-08 18:58:18.741068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:50.113 [2024-10-08 18:58:18.741085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.896 ms 00:30:50.113 [2024-10-08 18:58:18.741102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.113 [2024-10-08 18:58:18.762111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.113 [2024-10-08 18:58:18.762152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:50.113 [2024-10-08 18:58:18.762171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.920 ms 00:30:50.113 [2024-10-08 18:58:18.762183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.113 [2024-10-08 18:58:18.762773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:50.113 [2024-10-08 18:58:18.762798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:50.113 [2024-10-08 18:58:18.762816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:30:50.113 [2024-10-08 18:58:18.762830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.113 [2024-10-08 18:58:18.834031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:50.113 [2024-10-08 18:58:18.834078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:50.113 [2024-10-08 18:58:18.834099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:50.113 [2024-10-08 18:58:18.834112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
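In the statistics block above, WAF is the write amplification factor, i.e. total media writes divided by user writes; this run issued no user I/O before shutdown, so 960 / 0 is reported as inf. A trivial check of that arithmetic, taking the two counters as printed:

  awk 'BEGIN { total=960; user=0; print (user ? total/user : "inf") }'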
00:30:50.113 [2024-10-08 18:58:18.834192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:50.113 [2024-10-08 18:58:18.834207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:50.113 [2024-10-08 18:58:18.834228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:50.113 [2024-10-08 18:58:18.834240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.113 [2024-10-08 18:58:18.834391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:50.113 [2024-10-08 18:58:18.834408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:50.113 [2024-10-08 18:58:18.834425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:50.113 [2024-10-08 18:58:18.834437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.113 [2024-10-08 18:58:18.834476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:50.113 [2024-10-08 18:58:18.834489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:50.113 [2024-10-08 18:58:18.834506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:50.113 [2024-10-08 18:58:18.834522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.371 [2024-10-08 18:58:18.970901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:50.371 [2024-10-08 18:58:18.970966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:50.371 [2024-10-08 18:58:18.970989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:50.371 [2024-10-08 18:58:18.971003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.371 [2024-10-08 18:58:19.076715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:50.371 [2024-10-08 18:58:19.076776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:50.371 [2024-10-08 18:58:19.076804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:50.371 [2024-10-08 18:58:19.076817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.371 [2024-10-08 18:58:19.076975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:50.371 [2024-10-08 18:58:19.076991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:50.371 [2024-10-08 18:58:19.077008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:50.371 [2024-10-08 18:58:19.077020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.371 [2024-10-08 18:58:19.077111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:50.371 [2024-10-08 18:58:19.077126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:50.371 [2024-10-08 18:58:19.077143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:50.371 [2024-10-08 18:58:19.077156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.371 [2024-10-08 18:58:19.077290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:50.371 [2024-10-08 18:58:19.077306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:50.371 [2024-10-08 18:58:19.077323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:50.371 [2024-10-08 
18:58:19.077335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.371 [2024-10-08 18:58:19.077399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:50.371 [2024-10-08 18:58:19.077414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:50.371 [2024-10-08 18:58:19.077430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:50.371 [2024-10-08 18:58:19.077444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.371 [2024-10-08 18:58:19.077501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:50.371 [2024-10-08 18:58:19.077518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:50.371 [2024-10-08 18:58:19.077534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:50.371 [2024-10-08 18:58:19.077547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.371 [2024-10-08 18:58:19.077612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:50.371 [2024-10-08 18:58:19.077626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:50.371 [2024-10-08 18:58:19.077642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:50.371 [2024-10-08 18:58:19.077655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:50.371 [2024-10-08 18:58:19.077837] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 573.102 ms, result 0 00:30:50.371 true 00:30:50.371 18:58:19 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 75292 00:30:50.372 18:58:19 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # '[' -z 75292 ']' 00:30:50.372 18:58:19 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # kill -0 75292 00:30:50.372 18:58:19 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # uname 00:30:50.372 18:58:19 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:50.372 18:58:19 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75292 00:30:50.629 killing process with pid 75292 00:30:50.629 18:58:19 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:50.629 18:58:19 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:50.629 18:58:19 ftl.ftl_fio_basic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75292' 00:30:50.629 18:58:19 ftl.ftl_fio_basic -- common/autotest_common.sh@969 -- # kill 75292 00:30:50.629 18:58:19 ftl.ftl_fio_basic -- common/autotest_common.sh@974 -- # wait 75292 00:30:55.894 18:58:24 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:30:55.894 18:58:24 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:30:55.894 18:58:24 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:30:55.894 18:58:24 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:55.894 18:58:24 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:30:55.894 18:58:24 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:30:55.894 18:58:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:30:55.894 18:58:24 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:55.894 18:58:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:55.894 18:58:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:55.894 18:58:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:55.894 18:58:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:30:55.894 18:58:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:55.894 18:58:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:55.894 18:58:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:55.894 18:58:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:30:55.894 18:58:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:55.894 18:58:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:30:55.894 18:58:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:30:55.894 18:58:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:30:55.894 18:58:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:55.894 18:58:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:30:55.894 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:30:55.894 fio-3.35 00:30:55.894 Starting 1 thread 00:31:01.160 00:31:01.160 test: (groupid=0, jobs=1): err= 0: pid=75515: Tue Oct 8 18:58:29 2024 00:31:01.160 read: IOPS=1023, BW=68.0MiB/s (71.3MB/s)(255MiB/3744msec) 00:31:01.160 slat (nsec): min=4503, max=43818, avg=6720.30, stdev=2787.06 00:31:01.160 clat (usec): min=308, max=894, avg=429.95, stdev=56.28 00:31:01.160 lat (usec): min=314, max=900, avg=436.67, stdev=56.94 00:31:01.160 clat percentiles (usec): 00:31:01.160 | 1.00th=[ 338], 5.00th=[ 347], 10.00th=[ 351], 20.00th=[ 367], 00:31:01.160 | 30.00th=[ 412], 40.00th=[ 420], 50.00th=[ 424], 60.00th=[ 437], 00:31:01.160 | 70.00th=[ 453], 80.00th=[ 486], 90.00th=[ 502], 95.00th=[ 515], 00:31:01.160 | 99.00th=[ 570], 99.50th=[ 594], 99.90th=[ 676], 99.95th=[ 832], 00:31:01.160 | 99.99th=[ 898] 00:31:01.160 write: IOPS=1031, BW=68.5MiB/s (71.8MB/s)(256MiB/3740msec); 0 zone resets 00:31:01.160 slat (nsec): min=16134, max=87539, avg=21282.92, stdev=4881.13 00:31:01.160 clat (usec): min=348, max=973, avg=503.72, stdev=64.80 00:31:01.160 lat (usec): min=367, max=1020, avg=525.01, stdev=65.46 00:31:01.160 clat percentiles (usec): 00:31:01.160 | 1.00th=[ 379], 5.00th=[ 424], 10.00th=[ 433], 20.00th=[ 449], 00:31:01.160 | 30.00th=[ 461], 40.00th=[ 490], 50.00th=[ 506], 60.00th=[ 515], 00:31:01.160 | 70.00th=[ 529], 80.00th=[ 545], 90.00th=[ 578], 95.00th=[ 594], 00:31:01.160 | 99.00th=[ 758], 99.50th=[ 816], 99.90th=[ 898], 99.95th=[ 963], 00:31:01.160 | 99.99th=[ 971] 00:31:01.160 bw ( KiB/s): min=65824, max=74120, per=100.00%, avg=70234.29, stdev=2803.55, samples=7 00:31:01.160 iops : min= 968, max= 1090, avg=1032.86, stdev=41.23, samples=7 00:31:01.160 lat (usec) : 500=67.37%, 750=32.08%, 1000=0.55% 00:31:01.160 cpu 
: usr=99.06%, sys=0.19%, ctx=5, majf=0, minf=1169 00:31:01.160 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:01.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.160 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.160 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:01.160 00:31:01.160 Run status group 0 (all jobs): 00:31:01.160 READ: bw=68.0MiB/s (71.3MB/s), 68.0MiB/s-68.0MiB/s (71.3MB/s-71.3MB/s), io=255MiB (267MB), run=3744-3744msec 00:31:01.160 WRITE: bw=68.5MiB/s (71.8MB/s), 68.5MiB/s-68.5MiB/s (71.8MB/s-71.8MB/s), io=256MiB (269MB), run=3740-3740msec 00:31:03.062 ----------------------------------------------------- 00:31:03.062 Suppressions used: 00:31:03.062 count bytes template 00:31:03.062 1 5 /usr/src/fio/parse.c 00:31:03.062 1 8 libtcmalloc_minimal.so 00:31:03.062 1 904 libcrypto.so 00:31:03.062 ----------------------------------------------------- 00:31:03.062 00:31:03.062 18:58:31 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:31:03.062 18:58:31 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:03.062 18:58:31 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:31:03.062 18:58:31 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:31:03.062 18:58:31 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:31:03.062 18:58:31 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:03.062 18:58:31 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:31:03.062 18:58:31 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:31:03.062 18:58:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:31:03.062 18:58:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:03.062 18:58:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:03.062 18:58:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:03.062 18:58:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:03.062 18:58:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:31:03.062 18:58:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:03.062 18:58:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:03.062 18:58:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:03.062 18:58:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:03.062 18:58:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:31:03.062 18:58:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:03.062 18:58:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:03.062 18:58:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:31:03.062 18:58:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:31:03.062 18:58:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:31:03.331 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:31:03.331 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:31:03.331 fio-3.35 00:31:03.331 Starting 2 threads 00:31:35.437 00:31:35.437 first_half: (groupid=0, jobs=1): err= 0: pid=75618: Tue Oct 8 18:58:59 2024 00:31:35.437 read: IOPS=2480, BW=9920KiB/s (10.2MB/s)(255MiB/26308msec) 00:31:35.437 slat (nsec): min=3666, max=44127, avg=6436.65, stdev=2012.44 00:31:35.437 clat (usec): min=780, max=298098, avg=38612.67, stdev=20503.00 00:31:35.437 lat (usec): min=787, max=298103, avg=38619.10, stdev=20503.20 00:31:35.437 clat percentiles (msec): 00:31:35.437 | 1.00th=[ 9], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 35], 00:31:35.438 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 36], 60.00th=[ 36], 00:31:35.438 | 70.00th=[ 36], 80.00th=[ 38], 90.00th=[ 42], 95.00th=[ 47], 00:31:35.438 | 99.00th=[ 153], 99.50th=[ 182], 99.90th=[ 236], 99.95th=[ 262], 00:31:35.438 | 99.99th=[ 288] 00:31:35.438 write: IOPS=2917, BW=11.4MiB/s (12.0MB/s)(256MiB/22460msec); 0 zone resets 00:31:35.438 slat (usec): min=4, max=407, avg= 8.66, stdev= 5.55 00:31:35.438 clat (usec): min=385, max=100141, avg=12883.01, stdev=21113.13 00:31:35.438 lat (usec): min=399, max=100151, avg=12891.67, stdev=21113.32 00:31:35.438 clat percentiles (usec): 00:31:35.438 | 1.00th=[ 848], 5.00th=[ 1139], 10.00th=[ 1319], 20.00th=[ 1713], 00:31:35.438 | 30.00th=[ 3359], 40.00th=[ 5276], 50.00th=[ 6259], 60.00th=[ 6980], 00:31:35.438 | 70.00th=[ 8291], 80.00th=[13042], 90.00th=[33817], 95.00th=[77071], 00:31:35.438 | 99.00th=[90702], 99.50th=[92799], 99.90th=[95945], 99.95th=[98042], 00:31:35.438 | 99.99th=[99091] 00:31:35.438 bw ( KiB/s): min= 24, max=42816, per=80.21%, avg=18724.29, stdev=13640.49, samples=28 00:31:35.438 iops : min= 6, max=10704, avg=4681.14, stdev=3410.39, samples=28 00:31:35.438 lat (usec) : 500=0.01%, 750=0.15%, 1000=1.08% 00:31:35.438 lat (msec) : 2=10.97%, 4=4.72%, 10=21.84%, 20=7.46%, 50=47.35% 00:31:35.438 lat (msec) : 100=5.26%, 250=1.13%, 500=0.04% 00:31:35.438 cpu : usr=99.12%, sys=0.19%, ctx=38, majf=0, minf=5607 00:31:35.438 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:35.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.438 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:35.438 issued rwts: total=65245,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.438 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:35.438 second_half: (groupid=0, jobs=1): err= 0: pid=75619: Tue Oct 8 18:58:59 2024 00:31:35.438 read: IOPS=2491, BW=9965KiB/s (10.2MB/s)(255MiB/26165msec) 00:31:35.438 slat (nsec): min=3766, max=52644, avg=6610.31, stdev=2210.05 00:31:35.438 clat (usec): min=713, max=309669, avg=39206.51, stdev=19497.39 00:31:35.438 lat (usec): min=721, max=309676, avg=39213.12, stdev=19497.52 00:31:35.438 clat percentiles (msec): 00:31:35.438 | 1.00th=[ 6], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 35], 00:31:35.438 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 36], 60.00th=[ 36], 00:31:35.438 | 70.00th=[ 37], 80.00th=[ 39], 90.00th=[ 42], 95.00th=[ 52], 00:31:35.438 | 99.00th=[ 148], 99.50th=[ 176], 99.90th=[ 
201], 99.95th=[ 213], 00:31:35.438 | 99.99th=[ 305] 00:31:35.438 write: IOPS=3220, BW=12.6MiB/s (13.2MB/s)(256MiB/20352msec); 0 zone resets 00:31:35.438 slat (usec): min=4, max=209, avg= 8.57, stdev= 4.21 00:31:35.438 clat (usec): min=449, max=100052, avg=12083.64, stdev=21020.80 00:31:35.438 lat (usec): min=461, max=100060, avg=12092.21, stdev=21020.92 00:31:35.438 clat percentiles (usec): 00:31:35.438 | 1.00th=[ 922], 5.00th=[ 1172], 10.00th=[ 1336], 20.00th=[ 1598], 00:31:35.438 | 30.00th=[ 1942], 40.00th=[ 3523], 50.00th=[ 5080], 60.00th=[ 6194], 00:31:35.438 | 70.00th=[ 8160], 80.00th=[12911], 90.00th=[21627], 95.00th=[76022], 00:31:35.438 | 99.00th=[90702], 99.50th=[92799], 99.90th=[94897], 99.95th=[96994], 00:31:35.438 | 99.99th=[99091] 00:31:35.438 bw ( KiB/s): min= 136, max=44240, per=86.38%, avg=20164.92, stdev=13210.21, samples=26 00:31:35.438 iops : min= 34, max=11060, avg=5041.23, stdev=3302.55, samples=26 00:31:35.438 lat (usec) : 500=0.01%, 750=0.11%, 1000=0.75% 00:31:35.438 lat (msec) : 2=14.83%, 4=6.66%, 10=14.89%, 20=8.82%, 50=47.38% 00:31:35.438 lat (msec) : 100=5.27%, 250=1.28%, 500=0.01% 00:31:35.438 cpu : usr=99.21%, sys=0.19%, ctx=41, majf=0, minf=5514 00:31:35.438 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:31:35.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.438 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:35.438 issued rwts: total=65182,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.438 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:35.438 00:31:35.438 Run status group 0 (all jobs): 00:31:35.438 READ: bw=19.4MiB/s (20.3MB/s), 9920KiB/s-9965KiB/s (10.2MB/s-10.2MB/s), io=509MiB (534MB), run=26165-26308msec 00:31:35.438 WRITE: bw=22.8MiB/s (23.9MB/s), 11.4MiB/s-12.6MiB/s (12.0MB/s-13.2MB/s), io=512MiB (537MB), run=20352-22460msec 00:31:35.438 ----------------------------------------------------- 00:31:35.438 Suppressions used: 00:31:35.438 count bytes template 00:31:35.438 2 10 /usr/src/fio/parse.c 00:31:35.438 2 192 /usr/src/fio/iolog.c 00:31:35.438 1 8 libtcmalloc_minimal.so 00:31:35.438 1 904 libcrypto.so 00:31:35.438 ----------------------------------------------------- 00:31:35.438 00:31:35.438 18:59:02 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:31:35.438 18:59:02 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:35.438 18:59:02 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:31:35.438 18:59:02 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:31:35.438 18:59:02 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:31:35.438 18:59:02 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:35.438 18:59:02 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:31:35.438 18:59:02 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:31:35.438 18:59:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:31:35.438 18:59:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:35.438 18:59:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:35.438 18:59:02 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1339 -- # local sanitizers 00:31:35.438 18:59:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:35.438 18:59:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:31:35.438 18:59:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:35.438 18:59:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:35.438 18:59:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:31:35.438 18:59:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:35.438 18:59:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:35.438 18:59:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:35.438 18:59:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:35.438 18:59:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:31:35.438 18:59:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:31:35.438 18:59:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:31:35.438 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:31:35.438 fio-3.35 00:31:35.438 Starting 1 thread 00:31:50.317 00:31:50.317 test: (groupid=0, jobs=1): err= 0: pid=75959: Tue Oct 8 18:59:17 2024 00:31:50.317 read: IOPS=7457, BW=29.1MiB/s (30.5MB/s)(255MiB/8743msec) 00:31:50.317 slat (nsec): min=3762, max=30837, avg=5701.97, stdev=1579.38 00:31:50.317 clat (usec): min=699, max=33900, avg=17153.12, stdev=827.48 00:31:50.317 lat (usec): min=703, max=33907, avg=17158.82, stdev=827.46 00:31:50.317 clat percentiles (usec): 00:31:50.317 | 1.00th=[16188], 5.00th=[16450], 10.00th=[16581], 20.00th=[16712], 00:31:50.317 | 30.00th=[16909], 40.00th=[16909], 50.00th=[17171], 60.00th=[17171], 00:31:50.317 | 70.00th=[17433], 80.00th=[17433], 90.00th=[17695], 95.00th=[17957], 00:31:50.317 | 99.00th=[20317], 99.50th=[20579], 99.90th=[25297], 99.95th=[29754], 00:31:50.317 | 99.99th=[33162] 00:31:50.317 write: IOPS=13.6k, BW=53.2MiB/s (55.8MB/s)(256MiB/4812msec); 0 zone resets 00:31:50.317 slat (usec): min=4, max=561, avg= 8.36, stdev= 4.98 00:31:50.317 clat (usec): min=590, max=55668, avg=9348.55, stdev=11350.90 00:31:50.317 lat (usec): min=599, max=55675, avg=9356.92, stdev=11350.91 00:31:50.317 clat percentiles (usec): 00:31:50.317 | 1.00th=[ 857], 5.00th=[ 996], 10.00th=[ 1090], 20.00th=[ 1237], 00:31:50.317 | 30.00th=[ 1418], 40.00th=[ 1795], 50.00th=[ 6587], 60.00th=[ 7504], 00:31:50.317 | 70.00th=[ 8455], 80.00th=[10159], 90.00th=[33817], 95.00th=[35390], 00:31:50.317 | 99.00th=[37487], 99.50th=[38011], 99.90th=[43254], 99.95th=[47449], 00:31:50.317 | 99.99th=[53216] 00:31:50.317 bw ( KiB/s): min=27912, max=70890, per=96.21%, avg=52414.60, stdev=11318.19, samples=10 00:31:50.317 iops : min= 6978, max=17722, avg=13103.60, stdev=2829.46, samples=10 00:31:50.317 lat (usec) : 750=0.11%, 1000=2.45% 00:31:50.317 lat (msec) : 2=17.91%, 4=0.64%, 10=18.75%, 20=51.32%, 50=8.79% 00:31:50.317 lat (msec) : 100=0.01% 00:31:50.317 cpu : usr=99.02%, sys=0.26%, ctx=18, majf=0, minf=5565 00:31:50.317 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:50.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.317 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:50.317 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.317 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:50.317 00:31:50.317 Run status group 0 (all jobs): 00:31:50.317 READ: bw=29.1MiB/s (30.5MB/s), 29.1MiB/s-29.1MiB/s (30.5MB/s-30.5MB/s), io=255MiB (267MB), run=8743-8743msec 00:31:50.317 WRITE: bw=53.2MiB/s (55.8MB/s), 53.2MiB/s-53.2MiB/s (55.8MB/s-55.8MB/s), io=256MiB (268MB), run=4812-4812msec 00:31:50.884 ----------------------------------------------------- 00:31:50.884 Suppressions used: 00:31:50.884 count bytes template 00:31:50.884 1 5 /usr/src/fio/parse.c 00:31:50.884 2 192 /usr/src/fio/iolog.c 00:31:50.884 1 8 libtcmalloc_minimal.so 00:31:50.884 1 904 libcrypto.so 00:31:50.884 ----------------------------------------------------- 00:31:50.884 00:31:50.884 18:59:19 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:31:50.884 18:59:19 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:50.884 18:59:19 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:31:50.884 18:59:19 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:50.884 18:59:19 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:31:50.884 Remove shared memory files 00:31:50.884 18:59:19 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:50.884 18:59:19 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:31:50.884 18:59:19 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:31:50.884 18:59:19 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid58408 /dev/shm/spdk_tgt_trace.pid74187 00:31:50.884 18:59:19 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:50.884 18:59:19 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:31:51.144 00:31:51.144 real 1m10.566s 00:31:51.144 user 2m31.479s 00:31:51.144 sys 0m4.095s 00:31:51.144 18:59:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:51.144 18:59:19 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:31:51.144 ************************************ 00:31:51.144 END TEST ftl_fio_basic 00:31:51.144 ************************************ 00:31:51.144 18:59:19 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:31:51.144 18:59:19 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:51.144 18:59:19 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:51.144 18:59:19 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:51.144 ************************************ 00:31:51.144 START TEST ftl_bdevperf 00:31:51.144 ************************************ 00:31:51.144 18:59:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:31:51.144 * Looking for test storage... 
00:31:51.144 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:31:51.144 18:59:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:51.144 18:59:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:51.144 18:59:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:31:51.144 18:59:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:51.144 18:59:19 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:51.144 18:59:19 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:51.144 18:59:19 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:51.144 18:59:19 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:31:51.144 18:59:19 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:31:51.144 18:59:19 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:31:51.144 18:59:19 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:31:51.144 18:59:19 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:31:51.144 18:59:19 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:31:51.144 18:59:19 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:31:51.144 18:59:19 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:51.144 18:59:19 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:31:51.144 18:59:19 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:31:51.144 18:59:19 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:51.144 18:59:19 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:51.144 18:59:19 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:31:51.144 18:59:19 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:31:51.144 18:59:19 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:51.144 18:59:19 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:31:51.144 18:59:19 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:31:51.404 18:59:19 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:31:51.404 18:59:19 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:31:51.404 18:59:19 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:51.404 18:59:19 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:31:51.404 18:59:19 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:31:51.404 18:59:19 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:51.404 18:59:19 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:51.404 18:59:19 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:31:51.404 18:59:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:51.404 18:59:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:51.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:51.404 --rc genhtml_branch_coverage=1 00:31:51.404 --rc genhtml_function_coverage=1 00:31:51.404 --rc genhtml_legend=1 00:31:51.404 --rc geninfo_all_blocks=1 00:31:51.404 --rc geninfo_unexecuted_blocks=1 00:31:51.404 00:31:51.404 ' 00:31:51.404 18:59:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:51.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:51.404 --rc genhtml_branch_coverage=1 00:31:51.404 
--rc genhtml_function_coverage=1 00:31:51.404 --rc genhtml_legend=1 00:31:51.404 --rc geninfo_all_blocks=1 00:31:51.404 --rc geninfo_unexecuted_blocks=1 00:31:51.404 00:31:51.404 ' 00:31:51.404 18:59:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:51.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:51.404 --rc genhtml_branch_coverage=1 00:31:51.404 --rc genhtml_function_coverage=1 00:31:51.404 --rc genhtml_legend=1 00:31:51.404 --rc geninfo_all_blocks=1 00:31:51.404 --rc geninfo_unexecuted_blocks=1 00:31:51.404 00:31:51.404 ' 00:31:51.404 18:59:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:51.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:51.405 --rc genhtml_branch_coverage=1 00:31:51.405 --rc genhtml_function_coverage=1 00:31:51.405 --rc genhtml_legend=1 00:31:51.405 --rc geninfo_all_blocks=1 00:31:51.405 --rc geninfo_unexecuted_blocks=1 00:31:51.405 00:31:51.405 ' 00:31:51.405 18:59:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:31:51.405 18:59:19 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:31:51.405 18:59:19 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:31:51.405 18:59:19 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:31:51.405 18:59:19 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:31:51.405 18:59:19 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:31:51.405 18:59:19 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:51.405 18:59:19 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:31:51.405 18:59:19 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:31:51.405 18:59:19 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:51.405 18:59:19 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:51.405 18:59:19 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:31:51.405 18:59:19 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:31:51.405 18:59:19 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:51.405 18:59:19 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:51.405 18:59:19 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:31:51.405 18:59:19 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:31:51.405 18:59:19 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:51.405 18:59:19 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:51.405 18:59:19 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:31:51.405 18:59:19 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:31:51.405 18:59:19 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:51.405 18:59:19 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:51.405 18:59:19 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:51.405 18:59:19 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:51.405 18:59:19 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:31:51.405 18:59:19 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:31:51.405 18:59:19 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:51.405 18:59:19 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:51.405 18:59:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:31:51.405 18:59:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:31:51.405 18:59:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:31:51.405 18:59:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:51.405 18:59:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:31:51.405 18:59:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=76200 00:31:51.405 18:59:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:31:51.405 18:59:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 76200 00:31:51.405 18:59:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:31:51.405 18:59:19 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 76200 ']' 00:31:51.405 18:59:19 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:51.405 18:59:19 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:51.405 18:59:19 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:51.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:51.405 18:59:19 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:51.405 18:59:19 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:51.405 [2024-10-08 18:59:20.046700] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
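The trace above captures the harness's launch pattern: bdevperf is started idle (-z) so the FTL stack can be built over RPC first and the runs driven later via bdevperf.py perform_tests, and the script blocks in waitforlisten until the app's UNIX-domain RPC socket answers. A minimal sketch of that launch-and-wait step, assuming the default /var/tmp/spdk.sock socket and using a simple polling loop in place of common.sh's waitforlisten helper (which additionally checks that the PID stays alive):

# Start bdevperf idle (-z: wait for the perform_tests RPC) against the bdev name used in this test.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
bdevperf_pid=$!
# Poll until the RPC server responds; rpc_get_methods is a standard SPDK RPC.
# (Assumed loop, for illustration only.)
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done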
00:31:51.405 [2024-10-08 18:59:20.046882] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76200 ] 00:31:51.664 [2024-10-08 18:59:20.236043] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:51.923 [2024-10-08 18:59:20.531213] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:31:52.490 18:59:21 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:52.490 18:59:21 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:31:52.490 18:59:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:31:52.490 18:59:21 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:31:52.490 18:59:21 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:31:52.491 18:59:21 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:31:52.491 18:59:21 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:31:52.491 18:59:21 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:31:52.749 18:59:21 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:31:52.749 18:59:21 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:31:52.749 18:59:21 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:31:52.749 18:59:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:31:52.749 18:59:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:31:52.749 18:59:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:31:52.749 18:59:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:31:52.749 18:59:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:31:53.009 18:59:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:31:53.009 { 00:31:53.009 "name": "nvme0n1", 00:31:53.009 "aliases": [ 00:31:53.009 "e5fbceb6-352d-4cc8-9390-2ee93768a71f" 00:31:53.009 ], 00:31:53.009 "product_name": "NVMe disk", 00:31:53.009 "block_size": 4096, 00:31:53.009 "num_blocks": 1310720, 00:31:53.009 "uuid": "e5fbceb6-352d-4cc8-9390-2ee93768a71f", 00:31:53.009 "numa_id": -1, 00:31:53.009 "assigned_rate_limits": { 00:31:53.009 "rw_ios_per_sec": 0, 00:31:53.009 "rw_mbytes_per_sec": 0, 00:31:53.009 "r_mbytes_per_sec": 0, 00:31:53.009 "w_mbytes_per_sec": 0 00:31:53.009 }, 00:31:53.009 "claimed": true, 00:31:53.009 "claim_type": "read_many_write_one", 00:31:53.009 "zoned": false, 00:31:53.009 "supported_io_types": { 00:31:53.009 "read": true, 00:31:53.009 "write": true, 00:31:53.009 "unmap": true, 00:31:53.009 "flush": true, 00:31:53.009 "reset": true, 00:31:53.009 "nvme_admin": true, 00:31:53.009 "nvme_io": true, 00:31:53.009 "nvme_io_md": false, 00:31:53.009 "write_zeroes": true, 00:31:53.009 "zcopy": false, 00:31:53.009 "get_zone_info": false, 00:31:53.009 "zone_management": false, 00:31:53.009 "zone_append": false, 00:31:53.009 "compare": true, 00:31:53.009 "compare_and_write": false, 00:31:53.009 "abort": true, 00:31:53.009 "seek_hole": false, 00:31:53.009 "seek_data": false, 00:31:53.009 "copy": true, 00:31:53.009 "nvme_iov_md": false 00:31:53.009 }, 00:31:53.009 "driver_specific": { 00:31:53.009 
"nvme": [ 00:31:53.009 { 00:31:53.009 "pci_address": "0000:00:11.0", 00:31:53.009 "trid": { 00:31:53.009 "trtype": "PCIe", 00:31:53.009 "traddr": "0000:00:11.0" 00:31:53.009 }, 00:31:53.009 "ctrlr_data": { 00:31:53.009 "cntlid": 0, 00:31:53.009 "vendor_id": "0x1b36", 00:31:53.009 "model_number": "QEMU NVMe Ctrl", 00:31:53.009 "serial_number": "12341", 00:31:53.009 "firmware_revision": "8.0.0", 00:31:53.009 "subnqn": "nqn.2019-08.org.qemu:12341", 00:31:53.009 "oacs": { 00:31:53.009 "security": 0, 00:31:53.009 "format": 1, 00:31:53.009 "firmware": 0, 00:31:53.009 "ns_manage": 1 00:31:53.009 }, 00:31:53.009 "multi_ctrlr": false, 00:31:53.009 "ana_reporting": false 00:31:53.009 }, 00:31:53.009 "vs": { 00:31:53.009 "nvme_version": "1.4" 00:31:53.009 }, 00:31:53.009 "ns_data": { 00:31:53.009 "id": 1, 00:31:53.009 "can_share": false 00:31:53.009 } 00:31:53.009 } 00:31:53.009 ], 00:31:53.009 "mp_policy": "active_passive" 00:31:53.009 } 00:31:53.009 } 00:31:53.009 ]' 00:31:53.009 18:59:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:31:53.268 18:59:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:31:53.268 18:59:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:31:53.268 18:59:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:31:53.268 18:59:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:31:53.268 18:59:21 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:31:53.268 18:59:21 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:31:53.268 18:59:21 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:31:53.268 18:59:21 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:31:53.268 18:59:21 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:53.268 18:59:21 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:31:53.528 18:59:22 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=8a22bd17-dcdc-4890-a439-ddf2025ebc5d 00:31:53.528 18:59:22 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:31:53.528 18:59:22 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8a22bd17-dcdc-4890-a439-ddf2025ebc5d 00:31:53.787 18:59:22 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:31:54.046 18:59:22 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=c824ec61-7b84-48a8-9b16-2478d31770db 00:31:54.046 18:59:22 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u c824ec61-7b84-48a8-9b16-2478d31770db 00:31:54.305 18:59:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=6a570b42-5865-49a8-a051-fbe284f51bc0 00:31:54.305 18:59:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 6a570b42-5865-49a8-a051-fbe284f51bc0 00:31:54.305 18:59:22 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:31:54.305 18:59:22 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:31:54.305 18:59:22 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=6a570b42-5865-49a8-a051-fbe284f51bc0 00:31:54.305 18:59:22 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:31:54.305 18:59:22 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 6a570b42-5865-49a8-a051-fbe284f51bc0 00:31:54.305 18:59:22 
ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=6a570b42-5865-49a8-a051-fbe284f51bc0 00:31:54.305 18:59:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:31:54.305 18:59:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:31:54.305 18:59:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:31:54.305 18:59:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6a570b42-5865-49a8-a051-fbe284f51bc0 00:31:54.305 18:59:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:31:54.305 { 00:31:54.305 "name": "6a570b42-5865-49a8-a051-fbe284f51bc0", 00:31:54.305 "aliases": [ 00:31:54.305 "lvs/nvme0n1p0" 00:31:54.305 ], 00:31:54.305 "product_name": "Logical Volume", 00:31:54.305 "block_size": 4096, 00:31:54.305 "num_blocks": 26476544, 00:31:54.305 "uuid": "6a570b42-5865-49a8-a051-fbe284f51bc0", 00:31:54.305 "assigned_rate_limits": { 00:31:54.305 "rw_ios_per_sec": 0, 00:31:54.305 "rw_mbytes_per_sec": 0, 00:31:54.305 "r_mbytes_per_sec": 0, 00:31:54.305 "w_mbytes_per_sec": 0 00:31:54.305 }, 00:31:54.305 "claimed": false, 00:31:54.305 "zoned": false, 00:31:54.306 "supported_io_types": { 00:31:54.306 "read": true, 00:31:54.306 "write": true, 00:31:54.306 "unmap": true, 00:31:54.306 "flush": false, 00:31:54.306 "reset": true, 00:31:54.306 "nvme_admin": false, 00:31:54.306 "nvme_io": false, 00:31:54.306 "nvme_io_md": false, 00:31:54.306 "write_zeroes": true, 00:31:54.306 "zcopy": false, 00:31:54.306 "get_zone_info": false, 00:31:54.306 "zone_management": false, 00:31:54.306 "zone_append": false, 00:31:54.306 "compare": false, 00:31:54.306 "compare_and_write": false, 00:31:54.306 "abort": false, 00:31:54.306 "seek_hole": true, 00:31:54.306 "seek_data": true, 00:31:54.306 "copy": false, 00:31:54.306 "nvme_iov_md": false 00:31:54.306 }, 00:31:54.306 "driver_specific": { 00:31:54.306 "lvol": { 00:31:54.306 "lvol_store_uuid": "c824ec61-7b84-48a8-9b16-2478d31770db", 00:31:54.306 "base_bdev": "nvme0n1", 00:31:54.306 "thin_provision": true, 00:31:54.306 "num_allocated_clusters": 0, 00:31:54.306 "snapshot": false, 00:31:54.306 "clone": false, 00:31:54.306 "esnap_clone": false 00:31:54.306 } 00:31:54.306 } 00:31:54.306 } 00:31:54.306 ]' 00:31:54.306 18:59:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:31:54.564 18:59:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:31:54.564 18:59:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:31:54.564 18:59:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:31:54.564 18:59:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:31:54.564 18:59:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:31:54.564 18:59:23 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:31:54.564 18:59:23 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:31:54.564 18:59:23 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:31:54.823 18:59:23 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:31:54.823 18:59:23 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:31:54.823 18:59:23 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 6a570b42-5865-49a8-a051-fbe284f51bc0 00:31:54.823 18:59:23 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1378 -- # local bdev_name=6a570b42-5865-49a8-a051-fbe284f51bc0 00:31:54.823 18:59:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:31:54.823 18:59:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:31:54.823 18:59:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:31:54.823 18:59:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6a570b42-5865-49a8-a051-fbe284f51bc0 00:31:55.082 18:59:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:31:55.082 { 00:31:55.082 "name": "6a570b42-5865-49a8-a051-fbe284f51bc0", 00:31:55.082 "aliases": [ 00:31:55.082 "lvs/nvme0n1p0" 00:31:55.082 ], 00:31:55.082 "product_name": "Logical Volume", 00:31:55.082 "block_size": 4096, 00:31:55.082 "num_blocks": 26476544, 00:31:55.082 "uuid": "6a570b42-5865-49a8-a051-fbe284f51bc0", 00:31:55.082 "assigned_rate_limits": { 00:31:55.082 "rw_ios_per_sec": 0, 00:31:55.082 "rw_mbytes_per_sec": 0, 00:31:55.082 "r_mbytes_per_sec": 0, 00:31:55.082 "w_mbytes_per_sec": 0 00:31:55.082 }, 00:31:55.082 "claimed": false, 00:31:55.082 "zoned": false, 00:31:55.082 "supported_io_types": { 00:31:55.082 "read": true, 00:31:55.082 "write": true, 00:31:55.082 "unmap": true, 00:31:55.082 "flush": false, 00:31:55.082 "reset": true, 00:31:55.082 "nvme_admin": false, 00:31:55.082 "nvme_io": false, 00:31:55.082 "nvme_io_md": false, 00:31:55.082 "write_zeroes": true, 00:31:55.082 "zcopy": false, 00:31:55.082 "get_zone_info": false, 00:31:55.082 "zone_management": false, 00:31:55.082 "zone_append": false, 00:31:55.082 "compare": false, 00:31:55.082 "compare_and_write": false, 00:31:55.082 "abort": false, 00:31:55.082 "seek_hole": true, 00:31:55.082 "seek_data": true, 00:31:55.082 "copy": false, 00:31:55.082 "nvme_iov_md": false 00:31:55.082 }, 00:31:55.082 "driver_specific": { 00:31:55.082 "lvol": { 00:31:55.082 "lvol_store_uuid": "c824ec61-7b84-48a8-9b16-2478d31770db", 00:31:55.082 "base_bdev": "nvme0n1", 00:31:55.082 "thin_provision": true, 00:31:55.082 "num_allocated_clusters": 0, 00:31:55.082 "snapshot": false, 00:31:55.082 "clone": false, 00:31:55.082 "esnap_clone": false 00:31:55.082 } 00:31:55.082 } 00:31:55.082 } 00:31:55.082 ]' 00:31:55.082 18:59:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:31:55.082 18:59:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:31:55.082 18:59:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:31:55.082 18:59:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:31:55.082 18:59:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:31:55.082 18:59:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:31:55.082 18:59:23 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:31:55.082 18:59:23 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:31:55.341 18:59:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:31:55.341 18:59:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 6a570b42-5865-49a8-a051-fbe284f51bc0 00:31:55.341 18:59:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=6a570b42-5865-49a8-a051-fbe284f51bc0 00:31:55.341 18:59:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:31:55.341 18:59:24 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1380 -- # local bs 00:31:55.341 18:59:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:31:55.341 18:59:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6a570b42-5865-49a8-a051-fbe284f51bc0 00:31:55.601 18:59:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:31:55.601 { 00:31:55.601 "name": "6a570b42-5865-49a8-a051-fbe284f51bc0", 00:31:55.601 "aliases": [ 00:31:55.601 "lvs/nvme0n1p0" 00:31:55.601 ], 00:31:55.601 "product_name": "Logical Volume", 00:31:55.601 "block_size": 4096, 00:31:55.601 "num_blocks": 26476544, 00:31:55.601 "uuid": "6a570b42-5865-49a8-a051-fbe284f51bc0", 00:31:55.601 "assigned_rate_limits": { 00:31:55.601 "rw_ios_per_sec": 0, 00:31:55.601 "rw_mbytes_per_sec": 0, 00:31:55.601 "r_mbytes_per_sec": 0, 00:31:55.601 "w_mbytes_per_sec": 0 00:31:55.601 }, 00:31:55.601 "claimed": false, 00:31:55.601 "zoned": false, 00:31:55.601 "supported_io_types": { 00:31:55.601 "read": true, 00:31:55.601 "write": true, 00:31:55.601 "unmap": true, 00:31:55.601 "flush": false, 00:31:55.601 "reset": true, 00:31:55.601 "nvme_admin": false, 00:31:55.601 "nvme_io": false, 00:31:55.601 "nvme_io_md": false, 00:31:55.601 "write_zeroes": true, 00:31:55.601 "zcopy": false, 00:31:55.601 "get_zone_info": false, 00:31:55.601 "zone_management": false, 00:31:55.601 "zone_append": false, 00:31:55.601 "compare": false, 00:31:55.601 "compare_and_write": false, 00:31:55.601 "abort": false, 00:31:55.601 "seek_hole": true, 00:31:55.601 "seek_data": true, 00:31:55.601 "copy": false, 00:31:55.601 "nvme_iov_md": false 00:31:55.601 }, 00:31:55.601 "driver_specific": { 00:31:55.601 "lvol": { 00:31:55.601 "lvol_store_uuid": "c824ec61-7b84-48a8-9b16-2478d31770db", 00:31:55.601 "base_bdev": "nvme0n1", 00:31:55.601 "thin_provision": true, 00:31:55.601 "num_allocated_clusters": 0, 00:31:55.601 "snapshot": false, 00:31:55.601 "clone": false, 00:31:55.601 "esnap_clone": false 00:31:55.601 } 00:31:55.601 } 00:31:55.601 } 00:31:55.601 ]' 00:31:55.601 18:59:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:31:55.860 18:59:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:31:55.860 18:59:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:31:55.860 18:59:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:31:55.860 18:59:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:31:55.860 18:59:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:31:55.860 18:59:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:31:55.860 18:59:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 6a570b42-5865-49a8-a051-fbe284f51bc0 -c nvc0n1p0 --l2p_dram_limit 20 00:31:55.860 [2024-10-08 18:59:24.573381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.860 [2024-10-08 18:59:24.573455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:55.860 [2024-10-08 18:59:24.573473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:55.860 [2024-10-08 18:59:24.573487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.860 [2024-10-08 18:59:24.573552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.861 [2024-10-08 18:59:24.573568] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:55.861 [2024-10-08 18:59:24.573579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:31:55.861 [2024-10-08 18:59:24.573592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.861 [2024-10-08 18:59:24.573610] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:55.861 [2024-10-08 18:59:24.574632] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:55.861 [2024-10-08 18:59:24.574668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.861 [2024-10-08 18:59:24.574682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:55.861 [2024-10-08 18:59:24.574694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.062 ms 00:31:55.861 [2024-10-08 18:59:24.574707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.861 [2024-10-08 18:59:24.574787] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 18886a85-1cf8-48ee-ba58-03b1ae60bfea 00:31:55.861 [2024-10-08 18:59:24.576437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.861 [2024-10-08 18:59:24.576473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:31:55.861 [2024-10-08 18:59:24.576492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:31:55.861 [2024-10-08 18:59:24.576503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.861 [2024-10-08 18:59:24.584124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.861 [2024-10-08 18:59:24.584163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:55.861 [2024-10-08 18:59:24.584179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.572 ms 00:31:55.861 [2024-10-08 18:59:24.584190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.861 [2024-10-08 18:59:24.584292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.861 [2024-10-08 18:59:24.584307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:55.861 [2024-10-08 18:59:24.584326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:31:55.861 [2024-10-08 18:59:24.584336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.861 [2024-10-08 18:59:24.584388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.861 [2024-10-08 18:59:24.584400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:55.861 [2024-10-08 18:59:24.584417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:31:55.861 [2024-10-08 18:59:24.584427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.861 [2024-10-08 18:59:24.584454] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:55.861 [2024-10-08 18:59:24.589901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.861 [2024-10-08 18:59:24.589955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:55.861 [2024-10-08 18:59:24.589967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.457 ms 00:31:55.861 [2024-10-08 18:59:24.589988] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.861 [2024-10-08 18:59:24.590021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.861 [2024-10-08 18:59:24.590035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:55.861 [2024-10-08 18:59:24.590046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:55.861 [2024-10-08 18:59:24.590060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.861 [2024-10-08 18:59:24.590110] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:31:55.861 [2024-10-08 18:59:24.590246] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:55.861 [2024-10-08 18:59:24.590260] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:55.861 [2024-10-08 18:59:24.590276] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:55.861 [2024-10-08 18:59:24.590290] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:55.861 [2024-10-08 18:59:24.590305] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:55.861 [2024-10-08 18:59:24.590316] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:55.861 [2024-10-08 18:59:24.590331] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:55.861 [2024-10-08 18:59:24.590342] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:55.861 [2024-10-08 18:59:24.590354] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:55.861 [2024-10-08 18:59:24.590364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.861 [2024-10-08 18:59:24.590378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:55.861 [2024-10-08 18:59:24.590388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:31:55.861 [2024-10-08 18:59:24.590401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.861 [2024-10-08 18:59:24.590472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.861 [2024-10-08 18:59:24.590488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:55.861 [2024-10-08 18:59:24.590498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:31:55.861 [2024-10-08 18:59:24.590513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.861 [2024-10-08 18:59:24.590598] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:55.861 [2024-10-08 18:59:24.590618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:55.861 [2024-10-08 18:59:24.590630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:55.861 [2024-10-08 18:59:24.590643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:55.861 [2024-10-08 18:59:24.590653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:55.861 [2024-10-08 18:59:24.590665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:55.861 [2024-10-08 18:59:24.590675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:55.861 
[2024-10-08 18:59:24.590687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:55.861 [2024-10-08 18:59:24.590697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:55.861 [2024-10-08 18:59:24.590709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:55.861 [2024-10-08 18:59:24.590718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:55.861 [2024-10-08 18:59:24.590741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:55.861 [2024-10-08 18:59:24.590750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:55.861 [2024-10-08 18:59:24.590762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:55.861 [2024-10-08 18:59:24.590772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:55.861 [2024-10-08 18:59:24.590786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:55.861 [2024-10-08 18:59:24.590796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:55.861 [2024-10-08 18:59:24.590808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:55.861 [2024-10-08 18:59:24.590817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:55.861 [2024-10-08 18:59:24.590831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:55.861 [2024-10-08 18:59:24.590841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:55.861 [2024-10-08 18:59:24.590853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:55.861 [2024-10-08 18:59:24.590862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:55.861 [2024-10-08 18:59:24.590874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:55.861 [2024-10-08 18:59:24.590884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:55.861 [2024-10-08 18:59:24.590896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:55.861 [2024-10-08 18:59:24.590905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:55.861 [2024-10-08 18:59:24.590916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:55.861 [2024-10-08 18:59:24.590926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:55.861 [2024-10-08 18:59:24.590938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:55.861 [2024-10-08 18:59:24.590947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:55.861 [2024-10-08 18:59:24.590971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:55.861 [2024-10-08 18:59:24.590981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:55.861 [2024-10-08 18:59:24.590993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:55.861 [2024-10-08 18:59:24.591003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:55.861 [2024-10-08 18:59:24.591014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:55.861 [2024-10-08 18:59:24.591024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:55.861 [2024-10-08 18:59:24.591039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:55.861 [2024-10-08 18:59:24.591048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:31:55.861 [2024-10-08 18:59:24.591060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:55.861 [2024-10-08 18:59:24.591069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:55.861 [2024-10-08 18:59:24.591082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:55.861 [2024-10-08 18:59:24.591091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:55.861 [2024-10-08 18:59:24.591102] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:55.861 [2024-10-08 18:59:24.591112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:55.861 [2024-10-08 18:59:24.591125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:55.861 [2024-10-08 18:59:24.591135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:55.862 [2024-10-08 18:59:24.591152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:55.862 [2024-10-08 18:59:24.591161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:55.862 [2024-10-08 18:59:24.591174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:55.862 [2024-10-08 18:59:24.591184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:55.862 [2024-10-08 18:59:24.591195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:55.862 [2024-10-08 18:59:24.591205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:55.862 [2024-10-08 18:59:24.591221] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:55.862 [2024-10-08 18:59:24.591237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:55.862 [2024-10-08 18:59:24.591251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:55.862 [2024-10-08 18:59:24.591262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:55.862 [2024-10-08 18:59:24.591275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:55.862 [2024-10-08 18:59:24.591286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:55.862 [2024-10-08 18:59:24.591299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:55.862 [2024-10-08 18:59:24.591310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:55.862 [2024-10-08 18:59:24.591323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:55.862 [2024-10-08 18:59:24.591333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:55.862 [2024-10-08 18:59:24.591349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:55.862 [2024-10-08 18:59:24.591359] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:55.862 [2024-10-08 18:59:24.591372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:55.862 [2024-10-08 18:59:24.591383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:55.862 [2024-10-08 18:59:24.591396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:55.862 [2024-10-08 18:59:24.591406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:55.862 [2024-10-08 18:59:24.591422] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:55.862 [2024-10-08 18:59:24.591442] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:55.862 [2024-10-08 18:59:24.591457] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:55.862 [2024-10-08 18:59:24.591468] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:55.862 [2024-10-08 18:59:24.591481] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:55.862 [2024-10-08 18:59:24.591491] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:55.862 [2024-10-08 18:59:24.591504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.862 [2024-10-08 18:59:24.591515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:55.862 [2024-10-08 18:59:24.591528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.958 ms 00:31:55.862 [2024-10-08 18:59:24.591538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.862 [2024-10-08 18:59:24.591580] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:31:55.862 [2024-10-08 18:59:24.591593] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:31:59.157 [2024-10-08 18:59:27.380625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.157 [2024-10-08 18:59:27.380697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:31:59.157 [2024-10-08 18:59:27.380734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2789.019 ms 00:31:59.157 [2024-10-08 18:59:27.380747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.157 [2024-10-08 18:59:27.441565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.157 [2024-10-08 18:59:27.441621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:59.157 [2024-10-08 18:59:27.441643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.468 ms 00:31:59.157 [2024-10-08 18:59:27.441655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.157 [2024-10-08 18:59:27.441818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.157 [2024-10-08 18:59:27.441832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:59.157 [2024-10-08 18:59:27.441851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:31:59.157 [2024-10-08 18:59:27.441865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.157 [2024-10-08 18:59:27.491985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.157 [2024-10-08 18:59:27.492040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:59.157 [2024-10-08 18:59:27.492060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.051 ms 00:31:59.157 [2024-10-08 18:59:27.492077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.157 [2024-10-08 18:59:27.492129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.157 [2024-10-08 18:59:27.492141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:59.157 [2024-10-08 18:59:27.492157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:59.157 [2024-10-08 18:59:27.492168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.157 [2024-10-08 18:59:27.492693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.157 [2024-10-08 18:59:27.492716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:59.157 [2024-10-08 18:59:27.492732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.454 ms 00:31:59.157 [2024-10-08 18:59:27.492743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.157 [2024-10-08 18:59:27.492863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.157 [2024-10-08 18:59:27.492887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:59.157 [2024-10-08 18:59:27.492904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:31:59.157 [2024-10-08 18:59:27.492915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.157 [2024-10-08 18:59:27.513795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.157 [2024-10-08 18:59:27.513846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:59.157 [2024-10-08 
18:59:27.513866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.856 ms 00:31:59.157 [2024-10-08 18:59:27.513878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.157 [2024-10-08 18:59:27.528085] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:31:59.157 [2024-10-08 18:59:27.534459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.157 [2024-10-08 18:59:27.534520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:59.157 [2024-10-08 18:59:27.534536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.458 ms 00:31:59.157 [2024-10-08 18:59:27.534551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.157 [2024-10-08 18:59:27.614277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.157 [2024-10-08 18:59:27.614368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:31:59.157 [2024-10-08 18:59:27.614387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.682 ms 00:31:59.157 [2024-10-08 18:59:27.614402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.157 [2024-10-08 18:59:27.614609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.157 [2024-10-08 18:59:27.614641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:59.157 [2024-10-08 18:59:27.614653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:31:59.157 [2024-10-08 18:59:27.614666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.157 [2024-10-08 18:59:27.656831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.157 [2024-10-08 18:59:27.656893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:31:59.157 [2024-10-08 18:59:27.656910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.083 ms 00:31:59.157 [2024-10-08 18:59:27.656941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.157 [2024-10-08 18:59:27.699981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.157 [2024-10-08 18:59:27.700045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:31:59.157 [2024-10-08 18:59:27.700064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.955 ms 00:31:59.157 [2024-10-08 18:59:27.700079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.157 [2024-10-08 18:59:27.701053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.157 [2024-10-08 18:59:27.701089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:59.157 [2024-10-08 18:59:27.701107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.930 ms 00:31:59.157 [2024-10-08 18:59:27.701122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.157 [2024-10-08 18:59:27.815516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.157 [2024-10-08 18:59:27.815623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:31:59.157 [2024-10-08 18:59:27.815644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 114.330 ms 00:31:59.157 [2024-10-08 18:59:27.815660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.157 [2024-10-08 
18:59:27.860611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.157 [2024-10-08 18:59:27.860679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:31:59.157 [2024-10-08 18:59:27.860697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.855 ms 00:31:59.157 [2024-10-08 18:59:27.860713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.157 [2024-10-08 18:59:27.904249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.157 [2024-10-08 18:59:27.904309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:31:59.157 [2024-10-08 18:59:27.904326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.485 ms 00:31:59.157 [2024-10-08 18:59:27.904341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.417 [2024-10-08 18:59:27.947966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.417 [2024-10-08 18:59:27.948039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:59.417 [2024-10-08 18:59:27.948058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.577 ms 00:31:59.417 [2024-10-08 18:59:27.948073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.417 [2024-10-08 18:59:27.948130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.417 [2024-10-08 18:59:27.948152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:59.417 [2024-10-08 18:59:27.948167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:59.417 [2024-10-08 18:59:27.948182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.417 [2024-10-08 18:59:27.948301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.417 [2024-10-08 18:59:27.948323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:59.417 [2024-10-08 18:59:27.948335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:31:59.417 [2024-10-08 18:59:27.948350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.417 [2024-10-08 18:59:27.949589] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3375.666 ms, result 0 00:31:59.417 { 00:31:59.417 "name": "ftl0", 00:31:59.417 "uuid": "18886a85-1cf8-48ee-ba58-03b1ae60bfea" 00:31:59.417 } 00:31:59.417 18:59:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:31:59.417 18:59:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:31:59.417 18:59:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:31:59.675 18:59:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:31:59.675 [2024-10-08 18:59:28.402271] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:31:59.675 I/O size of 69632 is greater than zero copy threshold (65536). 00:31:59.675 Zero copy mechanism will not be used. 00:31:59.675 Running I/O for 4 seconds... 
00:32:01.985 2155.00 IOPS, 143.11 MiB/s
[2024-10-08T18:59:31.676Z] 2216.00 IOPS, 147.16 MiB/s
[2024-10-08T18:59:32.611Z] 2247.33 IOPS, 149.24 MiB/s
[2024-10-08T18:59:32.611Z] 2241.50 IOPS, 148.85 MiB/s
00:32:03.854 Latency(us)
00:32:03.854 [2024-10-08T18:59:32.611Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:03.854 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:32:03.854 ftl0 : 4.00 2240.48 148.78 0.00 0.00 467.76 200.90 2278.16
00:32:03.854 [2024-10-08T18:59:32.611Z] ===================================================================================================================
00:32:03.854 [2024-10-08T18:59:32.611Z] Total : 2240.48 148.78 0.00 0.00 467.76 200.90 2278.16
00:32:03.854 [2024-10-08 18:59:32.415478] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:32:03.854 {
00:32:03.854 "results": [
00:32:03.854 {
00:32:03.854 "job": "ftl0",
00:32:03.854 "core_mask": "0x1",
00:32:03.854 "workload": "randwrite",
00:32:03.854 "status": "finished",
00:32:03.854 "queue_depth": 1,
00:32:03.854 "io_size": 69632,
00:32:03.854 "runtime": 4.002263,
00:32:03.854 "iops": 2240.482447055578,
00:32:03.854 "mibps": 148.78203749978448,
00:32:03.854 "io_failed": 0,
00:32:03.854 "io_timeout": 0,
00:32:03.854 "avg_latency_us": 467.7627780167492,
00:32:03.854 "min_latency_us": 200.89904761904762,
00:32:03.854 "max_latency_us": 2278.1561904761907
00:32:03.854 }
00:32:03.854 ],
00:32:03.854 "core_count": 1
00:32:03.854 }
00:32:03.854 18:59:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
[2024-10-08 18:59:32.556586] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
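The JSON block above is the machine-readable form of the same results table. A small post-processing sketch (field names are exactly those printed above; saving the block to results.json is an assumption for illustration):

  # Pull throughput and average latency for the ftl0 job out of the
  # perform_tests results JSON.
  jq -r '.results[] | select(.job == "ftl0")
         | "\(.iops) IOPS, avg latency \(.avg_latency_us) us"' results.json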
00:32:05.854 9602.00 IOPS, 37.51 MiB/s
[2024-10-08T18:59:35.983Z] 9692.00 IOPS, 37.86 MiB/s
[2024-10-08T18:59:36.919Z] 9422.33 IOPS, 36.81 MiB/s
[2024-10-08T18:59:36.919Z] 9562.50 IOPS, 37.35 MiB/s
00:32:08.162 Latency(us)
00:32:08.162 [2024-10-08T18:59:36.919Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:08.162 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:32:08.162 ftl0 : 4.02 9555.22 37.33 0.00 0.00 13367.32 271.12 28960.67
00:32:08.162 [2024-10-08T18:59:36.919Z] ===================================================================================================================
00:32:08.162 [2024-10-08T18:59:36.919Z] Total : 9555.22 37.33 0.00 0.00 13367.32 0.00 28960.67
00:32:08.162 [2024-10-08 18:59:36.584477] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:32:08.162 {
00:32:08.162 "results": [
00:32:08.162 {
00:32:08.162 "job": "ftl0",
00:32:08.162 "core_mask": "0x1",
00:32:08.162 "workload": "randwrite",
00:32:08.162 "status": "finished",
00:32:08.162 "queue_depth": 128,
00:32:08.162 "io_size": 4096,
00:32:08.162 "runtime": 4.016443,
00:32:08.162 "iops": 9555.220875784868,
00:32:08.162 "mibps": 37.32508154603464,
00:32:08.162 "io_failed": 0,
00:32:08.162 "io_timeout": 0,
00:32:08.162 "avg_latency_us": 13367.321701073779,
00:32:08.162 "min_latency_us": 271.11619047619047,
00:32:08.162 "max_latency_us": 28960.670476190477
00:32:08.162 }
00:32:08.162 ],
00:32:08.162 "core_count": 1
00:32:08.162 }
00:32:08.162 18:59:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
[2024-10-08 18:59:36.737476] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
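Steps @30 through @32 of bdevperf.sh, issued above, exercise the same ftl0 bdev under three workloads: queue-depth-1 random writes at 69632-byte I/Os, queue-depth-128 random writes at 4 KiB, and a queue-depth-128 verify pass at 4 KiB. Condensed, with flags copied verbatim from this log (the surrounding script plumbing is omitted, and the bdevperf_py variable is introduced here for brevity):

  bdevperf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
  "$bdevperf_py" perform_tests -q 1   -w randwrite -t 4 -o 69632   # @30
  "$bdevperf_py" perform_tests -q 128 -w randwrite -t 4 -o 4096    # @31
  "$bdevperf_py" perform_tests -q 128 -w verify    -t 4 -o 4096    # @32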
00:32:10.028 7628.00 IOPS, 29.80 MiB/s
[2024-10-08T18:59:40.157Z] 7676.50 IOPS, 29.99 MiB/s
[2024-10-08T18:59:41.091Z] 7686.67 IOPS, 30.03 MiB/s
[2024-10-08T18:59:41.091Z] 7719.25 IOPS, 30.15 MiB/s
00:32:12.334 Latency(us)
00:32:12.334 [2024-10-08T18:59:41.091Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:12.334 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:32:12.334 Verification LBA range: start 0x0 length 0x1400000
00:32:12.334 ftl0 : 4.01 7730.59 30.20 0.00 0.00 16505.89 286.72 20097.71
00:32:12.334 [2024-10-08T18:59:41.091Z] ===================================================================================================================
00:32:12.334 [2024-10-08T18:59:41.091Z] Total : 7730.59 30.20 0.00 0.00 16505.89 0.00 20097.71
00:32:12.334 [2024-10-08 18:59:40.767866] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:32:12.334 {
00:32:12.334 "results": [
00:32:12.334 {
00:32:12.334 "job": "ftl0",
00:32:12.334 "core_mask": "0x1",
00:32:12.334 "workload": "verify",
00:32:12.334 "status": "finished",
00:32:12.334 "verify_range": {
00:32:12.334 "start": 0,
00:32:12.334 "length": 20971520
00:32:12.334 },
00:32:12.334 "queue_depth": 128,
00:32:12.334 "io_size": 4096,
00:32:12.334 "runtime": 4.010562,
00:32:12.334 "iops": 7730.587384012515,
00:32:12.334 "mibps": 30.197606968798887,
00:32:12.334 "io_failed": 0,
00:32:12.334 "io_timeout": 0,
00:32:12.334 "avg_latency_us": 16505.889757757832,
00:32:12.334 "min_latency_us": 286.72,
00:32:12.334 "max_latency_us": 20097.706666666665
00:32:12.334 }
00:32:12.334 ],
00:32:12.334 "core_count": 1
00:32:12.334 }
00:32:12.334 18:59:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
[2024-10-08 18:59:40.984037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-10-08 18:59:40.984100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
[2024-10-08 18:59:40.984118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
[2024-10-08 18:59:40.984132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-10-08 18:59:40.984159] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-10-08 18:59:40.988515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-10-08 18:59:40.988548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
[2024-10-08 18:59:40.988565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.334 ms
[2024-10-08 18:59:40.988576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-10-08 18:59:40.990467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-10-08 18:59:40.990508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
[2024-10-08 18:59:40.990527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.855 ms
[2024-10-08 18:59:40.990539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:12.593 [2024-10-08 18:59:41.157551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:12.593 [2024-10-08 18:59:41.157628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:32:12.593 [2024-10-08 18:59:41.157655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 166.974 ms 00:32:12.593 [2024-10-08 18:59:41.157668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.593 [2024-10-08 18:59:41.163219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.593 [2024-10-08 18:59:41.163260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:12.593 [2024-10-08 18:59:41.163275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.504 ms 00:32:12.593 [2024-10-08 18:59:41.163285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.593 [2024-10-08 18:59:41.202807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.593 [2024-10-08 18:59:41.202859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:12.593 [2024-10-08 18:59:41.202878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.463 ms 00:32:12.593 [2024-10-08 18:59:41.202889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.593 [2024-10-08 18:59:41.226317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.593 [2024-10-08 18:59:41.226372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:12.593 [2024-10-08 18:59:41.226391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.376 ms 00:32:12.593 [2024-10-08 18:59:41.226403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.593 [2024-10-08 18:59:41.226560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.593 [2024-10-08 18:59:41.226575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:12.593 [2024-10-08 18:59:41.226592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:32:12.593 [2024-10-08 18:59:41.226606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.593 [2024-10-08 18:59:41.264839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.593 [2024-10-08 18:59:41.264889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:32:12.593 [2024-10-08 18:59:41.264906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.209 ms 00:32:12.593 [2024-10-08 18:59:41.264917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.593 [2024-10-08 18:59:41.302016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.593 [2024-10-08 18:59:41.302060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:32:12.593 [2024-10-08 18:59:41.302078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.042 ms 00:32:12.593 [2024-10-08 18:59:41.302088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.593 [2024-10-08 18:59:41.340173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.593 [2024-10-08 18:59:41.340223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:12.593 [2024-10-08 18:59:41.340241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.036 ms 00:32:12.593 [2024-10-08 18:59:41.340252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.852 [2024-10-08 18:59:41.377926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:12.852 [2024-10-08 18:59:41.377985] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:32:12.852 [2024-10-08 18:59:41.378006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.569 ms
00:32:12.852 [2024-10-08 18:59:41.378017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:12.852 [2024-10-08 18:59:41.378061] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:32:12.852 [2024-10-08 18:59:41.378080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
[Bands 2 through 100 omitted: every band reports the same values as Band 1, 0 / 261120 wr_cnt: 0 state: free]
00:32:12.853 [2024-10-08 18:59:41.379366] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:32:12.853 [2024-10-08 18:59:41.379379] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 18886a85-1cf8-48ee-ba58-03b1ae60bfea
00:32:12.853 [2024-10-08 18:59:41.379390] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:32:12.853 [2024-10-08 18:59:41.379403] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:32:12.853 [2024-10-08 18:59:41.379413] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:32:12.853 [2024-10-08 18:59:41.379426] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:32:12.853 [2024-10-08 18:59:41.379443] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:32:12.853 [2024-10-08 18:59:41.379456] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:32:12.853 [2024-10-08 18:59:41.379466] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:32:12.853 [2024-10-08 18:59:41.379480] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:32:12.853 [2024-10-08 18:59:41.379489] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:32:12.853 [2024-10-08 18:59:41.379501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:12.853 [2024-10-08 18:59:41.379512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:32:12.853 [2024-10-08 18:59:41.379525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.442 ms
00:32:12.853 [2024-10-08 18:59:41.379538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:12.853 [2024-10-08 18:59:41.400228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:12.853 [2024-10-08 18:59:41.400273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:32:12.853 [2024-10-08 18:59:41.400290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.629 ms
00:32:12.853 [2024-10-08 18:59:41.400301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:12.853 [2024-10-08 18:59:41.400811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:12.853 [2024-10-08 18:59:41.400831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:32:12.853 [2024-10-08 18:59:41.400845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.483 ms
00:32:12.853 [2024-10-08 18:59:41.400855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:12.853 [2024-10-08 18:59:41.450019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:32:12.853 [2024-10-08 18:59:41.450073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:32:12.853 [2024-10-08 18:59:41.450093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:32:12.853 [2024-10-08 18:59:41.450104] mngt/ftl_mngt.c: 431:trace_step:
*NOTICE*: [FTL][ftl0] status: 0 00:32:12.853 [2024-10-08 18:59:41.450168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:12.853 [2024-10-08 18:59:41.450182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:12.853 [2024-10-08 18:59:41.450196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:12.853 [2024-10-08 18:59:41.450206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.853 [2024-10-08 18:59:41.450294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:12.853 [2024-10-08 18:59:41.450307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:12.853 [2024-10-08 18:59:41.450320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:12.853 [2024-10-08 18:59:41.450331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.853 [2024-10-08 18:59:41.450351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:12.853 [2024-10-08 18:59:41.450362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:12.853 [2024-10-08 18:59:41.450375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:12.853 [2024-10-08 18:59:41.450388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:12.853 [2024-10-08 18:59:41.578953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:12.853 [2024-10-08 18:59:41.579031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:12.853 [2024-10-08 18:59:41.579053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:12.853 [2024-10-08 18:59:41.579064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.111 [2024-10-08 18:59:41.684024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:13.112 [2024-10-08 18:59:41.684082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:13.112 [2024-10-08 18:59:41.684104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:13.112 [2024-10-08 18:59:41.684115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.112 [2024-10-08 18:59:41.684238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:13.112 [2024-10-08 18:59:41.684251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:13.112 [2024-10-08 18:59:41.684265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:13.112 [2024-10-08 18:59:41.684275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.112 [2024-10-08 18:59:41.684380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:13.112 [2024-10-08 18:59:41.684393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:13.112 [2024-10-08 18:59:41.684425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:13.112 [2024-10-08 18:59:41.684436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.112 [2024-10-08 18:59:41.684575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:13.112 [2024-10-08 18:59:41.684589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:13.112 [2024-10-08 18:59:41.684605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:32:13.112 [2024-10-08 18:59:41.684615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.112 [2024-10-08 18:59:41.684653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:13.112 [2024-10-08 18:59:41.684666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:13.112 [2024-10-08 18:59:41.684679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:13.112 [2024-10-08 18:59:41.684690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.112 [2024-10-08 18:59:41.684732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:13.112 [2024-10-08 18:59:41.684744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:13.112 [2024-10-08 18:59:41.684758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:13.112 [2024-10-08 18:59:41.684767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.112 [2024-10-08 18:59:41.684814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:13.112 [2024-10-08 18:59:41.684826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:13.112 [2024-10-08 18:59:41.684839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:13.112 [2024-10-08 18:59:41.684849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.112 [2024-10-08 18:59:41.685003] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 700.897 ms, result 0 00:32:13.112 true 00:32:13.112 18:59:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 76200 00:32:13.112 18:59:41 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 76200 ']' 00:32:13.112 18:59:41 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # kill -0 76200 00:32:13.112 18:59:41 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # uname 00:32:13.112 18:59:41 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:13.112 18:59:41 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76200 00:32:13.112 18:59:41 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:13.112 killing process with pid 76200 00:32:13.112 18:59:41 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:13.112 18:59:41 ftl.ftl_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76200' 00:32:13.112 Received shutdown signal, test time was about 4.000000 seconds 00:32:13.112 00:32:13.112 Latency(us) 00:32:13.112 [2024-10-08T18:59:41.869Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:13.112 [2024-10-08T18:59:41.869Z] =================================================================================================================== 00:32:13.112 [2024-10-08T18:59:41.869Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:13.112 18:59:41 ftl.ftl_bdevperf -- common/autotest_common.sh@969 -- # kill 76200 00:32:13.112 18:59:41 ftl.ftl_bdevperf -- common/autotest_common.sh@974 -- # wait 76200 00:32:17.358 Remove shared memory files 00:32:17.358 18:59:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:32:17.358 18:59:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:32:17.358 18:59:45 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:17.358 18:59:45 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:32:17.358 18:59:45 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:32:17.358 18:59:45 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:32:17.358 18:59:45 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:17.358 18:59:45 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:32:17.358 ************************************ 00:32:17.358 END TEST ftl_bdevperf 00:32:17.358 ************************************ 00:32:17.358 00:32:17.358 real 0m25.908s 00:32:17.358 user 0m29.195s 00:32:17.358 sys 0m1.385s 00:32:17.358 18:59:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:17.358 18:59:45 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:17.358 18:59:45 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:32:17.358 18:59:45 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:17.358 18:59:45 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:17.358 18:59:45 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:17.358 ************************************ 00:32:17.359 START TEST ftl_trim 00:32:17.359 ************************************ 00:32:17.359 18:59:45 ftl.ftl_trim -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:32:17.359 * Looking for test storage... 00:32:17.359 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:32:17.359 18:59:45 ftl.ftl_trim -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:17.359 18:59:45 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lcov --version 00:32:17.359 18:59:45 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:17.359 18:59:45 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:17.359 18:59:45 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:17.359 18:59:45 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:17.359 18:59:45 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:17.359 18:59:45 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:32:17.359 18:59:45 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:32:17.359 18:59:45 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:32:17.359 18:59:45 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:32:17.359 18:59:45 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:32:17.359 18:59:45 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:32:17.359 18:59:45 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:32:17.359 18:59:45 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:17.359 18:59:45 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:32:17.359 18:59:45 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:32:17.359 18:59:45 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:17.359 18:59:45 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:17.359 18:59:45 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:32:17.359 18:59:45 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:32:17.359 18:59:45 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:17.359 18:59:45 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:32:17.359 18:59:45 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:32:17.359 18:59:45 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:32:17.359 18:59:45 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:32:17.359 18:59:45 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:17.359 18:59:45 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:32:17.359 18:59:45 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:32:17.359 18:59:45 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:17.359 18:59:45 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:17.359 18:59:45 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:32:17.359 18:59:45 ftl.ftl_trim -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:17.359 18:59:45 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:17.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:17.359 --rc genhtml_branch_coverage=1 00:32:17.359 --rc genhtml_function_coverage=1 00:32:17.359 --rc genhtml_legend=1 00:32:17.359 --rc geninfo_all_blocks=1 00:32:17.359 --rc geninfo_unexecuted_blocks=1 00:32:17.359 00:32:17.359 ' 00:32:17.359 18:59:45 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:17.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:17.359 --rc genhtml_branch_coverage=1 00:32:17.359 --rc genhtml_function_coverage=1 00:32:17.359 --rc genhtml_legend=1 00:32:17.359 --rc geninfo_all_blocks=1 00:32:17.359 --rc geninfo_unexecuted_blocks=1 00:32:17.359 00:32:17.359 ' 00:32:17.359 18:59:45 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:17.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:17.359 --rc genhtml_branch_coverage=1 00:32:17.359 --rc genhtml_function_coverage=1 00:32:17.359 --rc genhtml_legend=1 00:32:17.359 --rc geninfo_all_blocks=1 00:32:17.359 --rc geninfo_unexecuted_blocks=1 00:32:17.359 00:32:17.359 ' 00:32:17.359 18:59:45 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:17.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:17.359 --rc genhtml_branch_coverage=1 00:32:17.359 --rc genhtml_function_coverage=1 00:32:17.359 --rc genhtml_legend=1 00:32:17.359 --rc geninfo_all_blocks=1 00:32:17.359 --rc geninfo_unexecuted_blocks=1 00:32:17.359 00:32:17.359 ' 00:32:17.359 18:59:45 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:32:17.359 18:59:45 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:32:17.359 18:59:45 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:32:17.359 18:59:45 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:32:17.359 18:59:45 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
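The dirname/readlink calls traced above are common.sh resolving its own location before setting up the FTL test environment. As the script reads (a standard bash idiom, with variable names matching the trace; $0 stands in for the trim.sh path since common.sh is sourced):

  testdir=$(readlink -f "$(dirname "$0")")   # .../spdk/test/ftl
  rootdir=$(readlink -f "$testdir/../..")    # repo root, two levels up
  rpc_py=$rootdir/scripts/rpc.py             # RPC helper used throughout the test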
00:32:17.359 18:59:45 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:32:17.359 18:59:45 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:17.359 18:59:45 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:32:17.359 18:59:45 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:32:17.359 18:59:45 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:17.359 18:59:45 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:17.359 18:59:45 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:32:17.359 18:59:45 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:32:17.359 18:59:45 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:17.359 18:59:45 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:17.359 18:59:45 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:32:17.359 18:59:45 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:32:17.359 18:59:45 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:17.359 18:59:45 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:17.359 18:59:45 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:32:17.359 18:59:45 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:32:17.359 18:59:45 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:32:17.359 18:59:45 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:32:17.359 18:59:45 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:17.359 18:59:45 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:17.359 18:59:45 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:32:17.359 18:59:45 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:32:17.359 18:59:45 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:17.359 18:59:45 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:17.359 18:59:45 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:17.359 18:59:45 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:32:17.359 18:59:45 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:32:17.359 18:59:45 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:32:17.359 18:59:45 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:32:17.359 18:59:45 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:32:17.359 18:59:45 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:32:17.359 18:59:45 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:32:17.359 18:59:45 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:32:17.359 18:59:45 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:17.359 18:59:45 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:17.359 18:59:45 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:32:17.359 18:59:45 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=76558 00:32:17.359 18:59:45 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 76558 00:32:17.359 18:59:45 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:32:17.359 18:59:45 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 76558 ']' 00:32:17.359 18:59:45 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:17.359 18:59:45 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:17.359 18:59:45 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:17.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:17.359 18:59:45 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:17.359 18:59:45 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:32:17.359 [2024-10-08 18:59:46.028577] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:32:17.359 [2024-10-08 18:59:46.028995] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76558 ] 00:32:17.618 [2024-10-08 18:59:46.216136] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:17.877 [2024-10-08 18:59:46.444283] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:32:17.877 [2024-10-08 18:59:46.444417] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:32:17.877 [2024-10-08 18:59:46.444451] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:32:18.814 18:59:47 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:18.814 18:59:47 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:32:18.814 18:59:47 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:32:18.814 18:59:47 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:32:18.814 18:59:47 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:32:18.814 18:59:47 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:32:18.814 18:59:47 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:32:18.814 18:59:47 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:32:19.072 18:59:47 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:32:19.072 18:59:47 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:32:19.072 18:59:47 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:32:19.072 18:59:47 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:32:19.072 18:59:47 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:32:19.072 18:59:47 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:32:19.072 18:59:47 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:32:19.072 18:59:47 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:32:19.331 18:59:47 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:32:19.331 { 00:32:19.331 "name": "nvme0n1", 00:32:19.331 "aliases": [ 
00:32:19.331 "e4b3fa9d-bf4b-48a8-b259-5a3cf86ac28f" 00:32:19.331 ], 00:32:19.331 "product_name": "NVMe disk", 00:32:19.331 "block_size": 4096, 00:32:19.331 "num_blocks": 1310720, 00:32:19.331 "uuid": "e4b3fa9d-bf4b-48a8-b259-5a3cf86ac28f", 00:32:19.331 "numa_id": -1, 00:32:19.331 "assigned_rate_limits": { 00:32:19.331 "rw_ios_per_sec": 0, 00:32:19.331 "rw_mbytes_per_sec": 0, 00:32:19.331 "r_mbytes_per_sec": 0, 00:32:19.331 "w_mbytes_per_sec": 0 00:32:19.331 }, 00:32:19.331 "claimed": true, 00:32:19.331 "claim_type": "read_many_write_one", 00:32:19.331 "zoned": false, 00:32:19.331 "supported_io_types": { 00:32:19.331 "read": true, 00:32:19.331 "write": true, 00:32:19.331 "unmap": true, 00:32:19.331 "flush": true, 00:32:19.331 "reset": true, 00:32:19.331 "nvme_admin": true, 00:32:19.331 "nvme_io": true, 00:32:19.331 "nvme_io_md": false, 00:32:19.331 "write_zeroes": true, 00:32:19.331 "zcopy": false, 00:32:19.331 "get_zone_info": false, 00:32:19.331 "zone_management": false, 00:32:19.331 "zone_append": false, 00:32:19.331 "compare": true, 00:32:19.331 "compare_and_write": false, 00:32:19.331 "abort": true, 00:32:19.331 "seek_hole": false, 00:32:19.331 "seek_data": false, 00:32:19.331 "copy": true, 00:32:19.331 "nvme_iov_md": false 00:32:19.331 }, 00:32:19.331 "driver_specific": { 00:32:19.331 "nvme": [ 00:32:19.331 { 00:32:19.331 "pci_address": "0000:00:11.0", 00:32:19.331 "trid": { 00:32:19.331 "trtype": "PCIe", 00:32:19.331 "traddr": "0000:00:11.0" 00:32:19.331 }, 00:32:19.331 "ctrlr_data": { 00:32:19.331 "cntlid": 0, 00:32:19.331 "vendor_id": "0x1b36", 00:32:19.331 "model_number": "QEMU NVMe Ctrl", 00:32:19.331 "serial_number": "12341", 00:32:19.331 "firmware_revision": "8.0.0", 00:32:19.331 "subnqn": "nqn.2019-08.org.qemu:12341", 00:32:19.331 "oacs": { 00:32:19.331 "security": 0, 00:32:19.331 "format": 1, 00:32:19.331 "firmware": 0, 00:32:19.331 "ns_manage": 1 00:32:19.331 }, 00:32:19.331 "multi_ctrlr": false, 00:32:19.331 "ana_reporting": false 00:32:19.331 }, 00:32:19.331 "vs": { 00:32:19.331 "nvme_version": "1.4" 00:32:19.331 }, 00:32:19.331 "ns_data": { 00:32:19.331 "id": 1, 00:32:19.331 "can_share": false 00:32:19.331 } 00:32:19.331 } 00:32:19.331 ], 00:32:19.331 "mp_policy": "active_passive" 00:32:19.331 } 00:32:19.331 } 00:32:19.331 ]' 00:32:19.331 18:59:47 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:32:19.331 18:59:47 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:32:19.331 18:59:47 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:32:19.331 18:59:47 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:32:19.331 18:59:47 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:32:19.331 18:59:47 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:32:19.331 18:59:47 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:32:19.331 18:59:47 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:32:19.331 18:59:47 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:32:19.331 18:59:47 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:32:19.331 18:59:47 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:19.591 18:59:48 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=c824ec61-7b84-48a8-9b16-2478d31770db 00:32:19.591 18:59:48 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:32:19.591 18:59:48 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u c824ec61-7b84-48a8-9b16-2478d31770db 00:32:19.849 18:59:48 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:32:20.108 18:59:48 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=427e7d37-7acd-47df-8156-30d75c7c5066 00:32:20.108 18:59:48 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 427e7d37-7acd-47df-8156-30d75c7c5066 00:32:20.108 18:59:48 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=42fa695a-4e38-4c49-9636-c4afb6a0806a 00:32:20.108 18:59:48 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 42fa695a-4e38-4c49-9636-c4afb6a0806a 00:32:20.108 18:59:48 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:32:20.108 18:59:48 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:32:20.108 18:59:48 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=42fa695a-4e38-4c49-9636-c4afb6a0806a 00:32:20.108 18:59:48 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:32:20.108 18:59:48 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 42fa695a-4e38-4c49-9636-c4afb6a0806a 00:32:20.108 18:59:48 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=42fa695a-4e38-4c49-9636-c4afb6a0806a 00:32:20.108 18:59:48 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:32:20.108 18:59:48 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:32:20.108 18:59:48 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:32:20.109 18:59:48 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 42fa695a-4e38-4c49-9636-c4afb6a0806a 00:32:20.368 18:59:49 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:32:20.368 { 00:32:20.368 "name": "42fa695a-4e38-4c49-9636-c4afb6a0806a", 00:32:20.368 "aliases": [ 00:32:20.368 "lvs/nvme0n1p0" 00:32:20.368 ], 00:32:20.368 "product_name": "Logical Volume", 00:32:20.368 "block_size": 4096, 00:32:20.368 "num_blocks": 26476544, 00:32:20.368 "uuid": "42fa695a-4e38-4c49-9636-c4afb6a0806a", 00:32:20.368 "assigned_rate_limits": { 00:32:20.368 "rw_ios_per_sec": 0, 00:32:20.368 "rw_mbytes_per_sec": 0, 00:32:20.368 "r_mbytes_per_sec": 0, 00:32:20.368 "w_mbytes_per_sec": 0 00:32:20.368 }, 00:32:20.368 "claimed": false, 00:32:20.368 "zoned": false, 00:32:20.368 "supported_io_types": { 00:32:20.368 "read": true, 00:32:20.368 "write": true, 00:32:20.368 "unmap": true, 00:32:20.368 "flush": false, 00:32:20.368 "reset": true, 00:32:20.368 "nvme_admin": false, 00:32:20.368 "nvme_io": false, 00:32:20.368 "nvme_io_md": false, 00:32:20.368 "write_zeroes": true, 00:32:20.368 "zcopy": false, 00:32:20.368 "get_zone_info": false, 00:32:20.368 "zone_management": false, 00:32:20.368 "zone_append": false, 00:32:20.368 "compare": false, 00:32:20.368 "compare_and_write": false, 00:32:20.368 "abort": false, 00:32:20.368 "seek_hole": true, 00:32:20.368 "seek_data": true, 00:32:20.368 "copy": false, 00:32:20.368 "nvme_iov_md": false 00:32:20.368 }, 00:32:20.368 "driver_specific": { 00:32:20.368 "lvol": { 00:32:20.368 "lvol_store_uuid": "427e7d37-7acd-47df-8156-30d75c7c5066", 00:32:20.368 "base_bdev": "nvme0n1", 00:32:20.368 "thin_provision": true, 00:32:20.368 "num_allocated_clusters": 0, 00:32:20.368 "snapshot": false, 00:32:20.368 "clone": false, 00:32:20.368 "esnap_clone": false 00:32:20.368 } 00:32:20.368 } 00:32:20.368 } 00:32:20.368 ]' 00:32:20.368 18:59:49 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:32:20.368 18:59:49 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:32:20.368 18:59:49 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:32:20.368 18:59:49 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:32:20.368 18:59:49 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:32:20.368 18:59:49 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:32:20.368 18:59:49 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:32:20.368 18:59:49 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:32:20.627 18:59:49 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:32:20.886 18:59:49 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:32:20.886 18:59:49 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:32:20.886 18:59:49 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 42fa695a-4e38-4c49-9636-c4afb6a0806a 00:32:20.886 18:59:49 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=42fa695a-4e38-4c49-9636-c4afb6a0806a 00:32:20.886 18:59:49 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:32:20.886 18:59:49 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:32:20.886 18:59:49 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:32:20.886 18:59:49 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 42fa695a-4e38-4c49-9636-c4afb6a0806a 00:32:21.145 18:59:49 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:32:21.145 { 00:32:21.145 "name": "42fa695a-4e38-4c49-9636-c4afb6a0806a", 00:32:21.145 "aliases": [ 00:32:21.145 "lvs/nvme0n1p0" 00:32:21.145 ], 00:32:21.145 "product_name": "Logical Volume", 00:32:21.145 "block_size": 4096, 00:32:21.145 "num_blocks": 26476544, 00:32:21.145 "uuid": "42fa695a-4e38-4c49-9636-c4afb6a0806a", 00:32:21.145 "assigned_rate_limits": { 00:32:21.145 "rw_ios_per_sec": 0, 00:32:21.145 "rw_mbytes_per_sec": 0, 00:32:21.145 "r_mbytes_per_sec": 0, 00:32:21.145 "w_mbytes_per_sec": 0 00:32:21.145 }, 00:32:21.145 "claimed": false, 00:32:21.145 "zoned": false, 00:32:21.145 "supported_io_types": { 00:32:21.145 "read": true, 00:32:21.145 "write": true, 00:32:21.145 "unmap": true, 00:32:21.145 "flush": false, 00:32:21.145 "reset": true, 00:32:21.145 "nvme_admin": false, 00:32:21.145 "nvme_io": false, 00:32:21.145 "nvme_io_md": false, 00:32:21.145 "write_zeroes": true, 00:32:21.145 "zcopy": false, 00:32:21.145 "get_zone_info": false, 00:32:21.145 "zone_management": false, 00:32:21.145 "zone_append": false, 00:32:21.145 "compare": false, 00:32:21.145 "compare_and_write": false, 00:32:21.145 "abort": false, 00:32:21.145 "seek_hole": true, 00:32:21.145 "seek_data": true, 00:32:21.145 "copy": false, 00:32:21.145 "nvme_iov_md": false 00:32:21.145 }, 00:32:21.145 "driver_specific": { 00:32:21.145 "lvol": { 00:32:21.145 "lvol_store_uuid": "427e7d37-7acd-47df-8156-30d75c7c5066", 00:32:21.145 "base_bdev": "nvme0n1", 00:32:21.145 "thin_provision": true, 00:32:21.145 "num_allocated_clusters": 0, 00:32:21.145 "snapshot": false, 00:32:21.145 "clone": false, 00:32:21.145 "esnap_clone": false 00:32:21.145 } 00:32:21.145 } 00:32:21.145 } 00:32:21.145 ]' 00:32:21.145 18:59:49 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:32:21.145 18:59:49 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # bs=4096 00:32:21.145 18:59:49 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:32:21.145 18:59:49 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:32:21.145 18:59:49 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:32:21.145 18:59:49 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:32:21.145 18:59:49 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:32:21.145 18:59:49 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:32:21.405 18:59:49 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:32:21.405 18:59:49 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:32:21.405 18:59:49 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 42fa695a-4e38-4c49-9636-c4afb6a0806a 00:32:21.405 18:59:49 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=42fa695a-4e38-4c49-9636-c4afb6a0806a 00:32:21.405 18:59:49 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:32:21.405 18:59:49 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:32:21.405 18:59:49 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:32:21.405 18:59:49 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 42fa695a-4e38-4c49-9636-c4afb6a0806a 00:32:21.676 18:59:50 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:32:21.676 { 00:32:21.676 "name": "42fa695a-4e38-4c49-9636-c4afb6a0806a", 00:32:21.676 "aliases": [ 00:32:21.676 "lvs/nvme0n1p0" 00:32:21.676 ], 00:32:21.676 "product_name": "Logical Volume", 00:32:21.676 "block_size": 4096, 00:32:21.676 "num_blocks": 26476544, 00:32:21.676 "uuid": "42fa695a-4e38-4c49-9636-c4afb6a0806a", 00:32:21.676 "assigned_rate_limits": { 00:32:21.676 "rw_ios_per_sec": 0, 00:32:21.676 "rw_mbytes_per_sec": 0, 00:32:21.676 "r_mbytes_per_sec": 0, 00:32:21.676 "w_mbytes_per_sec": 0 00:32:21.676 }, 00:32:21.676 "claimed": false, 00:32:21.676 "zoned": false, 00:32:21.676 "supported_io_types": { 00:32:21.676 "read": true, 00:32:21.676 "write": true, 00:32:21.676 "unmap": true, 00:32:21.676 "flush": false, 00:32:21.676 "reset": true, 00:32:21.676 "nvme_admin": false, 00:32:21.676 "nvme_io": false, 00:32:21.676 "nvme_io_md": false, 00:32:21.676 "write_zeroes": true, 00:32:21.676 "zcopy": false, 00:32:21.676 "get_zone_info": false, 00:32:21.676 "zone_management": false, 00:32:21.676 "zone_append": false, 00:32:21.676 "compare": false, 00:32:21.676 "compare_and_write": false, 00:32:21.676 "abort": false, 00:32:21.676 "seek_hole": true, 00:32:21.676 "seek_data": true, 00:32:21.676 "copy": false, 00:32:21.676 "nvme_iov_md": false 00:32:21.676 }, 00:32:21.676 "driver_specific": { 00:32:21.676 "lvol": { 00:32:21.676 "lvol_store_uuid": "427e7d37-7acd-47df-8156-30d75c7c5066", 00:32:21.676 "base_bdev": "nvme0n1", 00:32:21.676 "thin_provision": true, 00:32:21.676 "num_allocated_clusters": 0, 00:32:21.676 "snapshot": false, 00:32:21.676 "clone": false, 00:32:21.676 "esnap_clone": false 00:32:21.676 } 00:32:21.676 } 00:32:21.676 } 00:32:21.676 ]' 00:32:21.676 18:59:50 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:32:21.676 18:59:50 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:32:21.676 18:59:50 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:32:21.676 18:59:50 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # 
nb=26476544 00:32:21.676 18:59:50 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:32:21.676 18:59:50 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:32:21.676 18:59:50 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:32:21.676 18:59:50 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 42fa695a-4e38-4c49-9636-c4afb6a0806a -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:32:21.971 [2024-10-08 18:59:50.471180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:21.971 [2024-10-08 18:59:50.471236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:21.971 [2024-10-08 18:59:50.471258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:21.971 [2024-10-08 18:59:50.471278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:21.971 [2024-10-08 18:59:50.474739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:21.971 [2024-10-08 18:59:50.474784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:21.971 [2024-10-08 18:59:50.474800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.423 ms 00:32:21.971 [2024-10-08 18:59:50.474812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:21.971 [2024-10-08 18:59:50.474944] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:21.971 [2024-10-08 18:59:50.476258] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:21.971 [2024-10-08 18:59:50.476428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:21.971 [2024-10-08 18:59:50.476456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:21.971 [2024-10-08 18:59:50.476481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.486 ms 00:32:21.971 [2024-10-08 18:59:50.476504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:21.971 [2024-10-08 18:59:50.476640] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 1e6e43eb-28c1-40da-a9ff-547ddd670846 00:32:21.971 [2024-10-08 18:59:50.478197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:21.971 [2024-10-08 18:59:50.478238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:32:21.971 [2024-10-08 18:59:50.478254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:32:21.971 [2024-10-08 18:59:50.478270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:21.971 [2024-10-08 18:59:50.486244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:21.971 [2024-10-08 18:59:50.486289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:21.971 [2024-10-08 18:59:50.486304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.882 ms 00:32:21.971 [2024-10-08 18:59:50.486318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:21.971 [2024-10-08 18:59:50.486489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:21.971 [2024-10-08 18:59:50.486507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:21.971 [2024-10-08 18:59:50.486520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.083 ms 00:32:21.971 [2024-10-08 18:59:50.486537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:21.971 [2024-10-08 18:59:50.486583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:21.971 [2024-10-08 18:59:50.486597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:21.971 [2024-10-08 18:59:50.486608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:32:21.971 [2024-10-08 18:59:50.486621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:21.971 [2024-10-08 18:59:50.486660] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:32:21.971 [2024-10-08 18:59:50.491913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:21.971 [2024-10-08 18:59:50.491951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:21.971 [2024-10-08 18:59:50.491988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.256 ms 00:32:21.971 [2024-10-08 18:59:50.492000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:21.971 [2024-10-08 18:59:50.492075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:21.971 [2024-10-08 18:59:50.492088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:21.971 [2024-10-08 18:59:50.492102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:32:21.971 [2024-10-08 18:59:50.492132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:21.971 [2024-10-08 18:59:50.492170] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:32:21.971 [2024-10-08 18:59:50.492307] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:21.971 [2024-10-08 18:59:50.492328] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:21.971 [2024-10-08 18:59:50.492369] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:21.971 [2024-10-08 18:59:50.492404] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:21.971 [2024-10-08 18:59:50.492419] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:21.971 [2024-10-08 18:59:50.492435] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:32:21.971 [2024-10-08 18:59:50.492452] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:21.971 [2024-10-08 18:59:50.492481] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:21.971 [2024-10-08 18:59:50.492502] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:21.971 [2024-10-08 18:59:50.492522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:21.971 [2024-10-08 18:59:50.492538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:21.971 [2024-10-08 18:59:50.492562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.352 ms 00:32:21.971 [2024-10-08 18:59:50.492581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:21.971 [2024-10-08 18:59:50.492712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:21.971 
[2024-10-08 18:59:50.492756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:21.971 [2024-10-08 18:59:50.492780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:32:21.971 [2024-10-08 18:59:50.492801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:21.971 [2024-10-08 18:59:50.493006] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:21.971 [2024-10-08 18:59:50.493040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:21.971 [2024-10-08 18:59:50.493072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:21.971 [2024-10-08 18:59:50.493096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:21.971 [2024-10-08 18:59:50.493134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:21.971 [2024-10-08 18:59:50.493157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:21.971 [2024-10-08 18:59:50.493184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:32:21.971 [2024-10-08 18:59:50.493207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:21.971 [2024-10-08 18:59:50.493234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:32:21.971 [2024-10-08 18:59:50.493256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:21.971 [2024-10-08 18:59:50.493282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:21.972 [2024-10-08 18:59:50.493300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:32:21.972 [2024-10-08 18:59:50.493322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:21.972 [2024-10-08 18:59:50.493338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:21.972 [2024-10-08 18:59:50.493359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:32:21.972 [2024-10-08 18:59:50.493376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:21.972 [2024-10-08 18:59:50.493403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:21.972 [2024-10-08 18:59:50.493421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:32:21.972 [2024-10-08 18:59:50.493446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:21.972 [2024-10-08 18:59:50.493467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:21.972 [2024-10-08 18:59:50.493490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:32:21.972 [2024-10-08 18:59:50.493507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:21.972 [2024-10-08 18:59:50.493533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:21.972 [2024-10-08 18:59:50.493553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:32:21.972 [2024-10-08 18:59:50.493576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:21.972 [2024-10-08 18:59:50.493592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:21.972 [2024-10-08 18:59:50.493609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:32:21.972 [2024-10-08 18:59:50.493623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:21.972 [2024-10-08 18:59:50.493639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:32:21.972 [2024-10-08 18:59:50.493656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:32:21.972 [2024-10-08 18:59:50.493677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:21.972 [2024-10-08 18:59:50.493696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:21.972 [2024-10-08 18:59:50.493724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:32:21.972 [2024-10-08 18:59:50.493745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:21.972 [2024-10-08 18:59:50.493768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:21.972 [2024-10-08 18:59:50.493787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:32:21.972 [2024-10-08 18:59:50.493807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:21.972 [2024-10-08 18:59:50.493821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:21.972 [2024-10-08 18:59:50.493838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:32:21.972 [2024-10-08 18:59:50.493852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:21.972 [2024-10-08 18:59:50.493868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:21.972 [2024-10-08 18:59:50.493882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:32:21.972 [2024-10-08 18:59:50.493904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:21.972 [2024-10-08 18:59:50.493919] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:21.972 [2024-10-08 18:59:50.493943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:21.972 [2024-10-08 18:59:50.493988] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:21.972 [2024-10-08 18:59:50.494013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:21.972 [2024-10-08 18:59:50.494032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:21.972 [2024-10-08 18:59:50.494064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:21.972 [2024-10-08 18:59:50.494083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:21.972 [2024-10-08 18:59:50.494106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:21.972 [2024-10-08 18:59:50.494125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:21.972 [2024-10-08 18:59:50.494145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:21.972 [2024-10-08 18:59:50.494164] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:21.972 [2024-10-08 18:59:50.494189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:21.972 [2024-10-08 18:59:50.494214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:32:21.972 [2024-10-08 18:59:50.494241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:32:21.972 [2024-10-08 18:59:50.494263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:32:21.972 [2024-10-08 18:59:50.494288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:32:21.972 [2024-10-08 18:59:50.494310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:32:21.972 [2024-10-08 18:59:50.494334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:32:21.972 [2024-10-08 18:59:50.494352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:32:21.972 [2024-10-08 18:59:50.494370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:32:21.972 [2024-10-08 18:59:50.494385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:32:21.972 [2024-10-08 18:59:50.494409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:32:21.972 [2024-10-08 18:59:50.494430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:32:21.972 [2024-10-08 18:59:50.494466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:32:21.972 [2024-10-08 18:59:50.494487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:32:21.972 [2024-10-08 18:59:50.494511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:32:21.972 [2024-10-08 18:59:50.494532] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:21.972 [2024-10-08 18:59:50.494553] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:21.972 [2024-10-08 18:59:50.494569] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:21.972 [2024-10-08 18:59:50.494587] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:21.972 [2024-10-08 18:59:50.494604] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:21.972 [2024-10-08 18:59:50.494627] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:21.972 [2024-10-08 18:59:50.494647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:21.972 [2024-10-08 18:59:50.494669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:21.972 [2024-10-08 18:59:50.494690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.759 ms 00:32:21.972 [2024-10-08 18:59:50.494712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:21.972 [2024-10-08 18:59:50.494836] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:32:21.972 [2024-10-08 18:59:50.494867] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:32:24.508 [2024-10-08 18:59:52.997100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.508 [2024-10-08 18:59:52.997173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:32:24.508 [2024-10-08 18:59:52.997191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2502.248 ms 00:32:24.508 [2024-10-08 18:59:52.997205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.508 [2024-10-08 18:59:53.044350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.508 [2024-10-08 18:59:53.044421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:24.508 [2024-10-08 18:59:53.044441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.752 ms 00:32:24.508 [2024-10-08 18:59:53.044456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.508 [2024-10-08 18:59:53.044648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.508 [2024-10-08 18:59:53.044666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:24.508 [2024-10-08 18:59:53.044680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:32:24.508 [2024-10-08 18:59:53.044698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.508 [2024-10-08 18:59:53.096268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.508 [2024-10-08 18:59:53.096325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:24.508 [2024-10-08 18:59:53.096341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.529 ms 00:32:24.508 [2024-10-08 18:59:53.096356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.508 [2024-10-08 18:59:53.096455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.508 [2024-10-08 18:59:53.096473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:24.508 [2024-10-08 18:59:53.096488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:24.508 [2024-10-08 18:59:53.096501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.508 [2024-10-08 18:59:53.096946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.508 [2024-10-08 18:59:53.096982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:24.508 [2024-10-08 18:59:53.096995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.408 ms 00:32:24.508 [2024-10-08 18:59:53.097008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.508 [2024-10-08 18:59:53.097124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.508 [2024-10-08 18:59:53.097138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:24.508 [2024-10-08 18:59:53.097149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:32:24.508 [2024-10-08 18:59:53.097167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.508 [2024-10-08 18:59:53.119823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.508 [2024-10-08 18:59:53.120092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:32:24.508 [2024-10-08 18:59:53.120133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.621 ms 00:32:24.508 [2024-10-08 18:59:53.120156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.508 [2024-10-08 18:59:53.132801] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:32:24.508 [2024-10-08 18:59:53.149563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.508 [2024-10-08 18:59:53.149626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:24.508 [2024-10-08 18:59:53.149647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.219 ms 00:32:24.508 [2024-10-08 18:59:53.149658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.508 [2024-10-08 18:59:53.232422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.508 [2024-10-08 18:59:53.232490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:32:24.508 [2024-10-08 18:59:53.232510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.619 ms 00:32:24.508 [2024-10-08 18:59:53.232522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.508 [2024-10-08 18:59:53.232773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.508 [2024-10-08 18:59:53.232797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:24.508 [2024-10-08 18:59:53.232818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:32:24.508 [2024-10-08 18:59:53.232829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.767 [2024-10-08 18:59:53.271025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.767 [2024-10-08 18:59:53.271071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:32:24.767 [2024-10-08 18:59:53.271089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.152 ms 00:32:24.767 [2024-10-08 18:59:53.271100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.767 [2024-10-08 18:59:53.308016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.767 [2024-10-08 18:59:53.308206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:32:24.767 [2024-10-08 18:59:53.308238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.817 ms 00:32:24.767 [2024-10-08 18:59:53.308249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.767 [2024-10-08 18:59:53.309176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.767 [2024-10-08 18:59:53.309238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:24.767 [2024-10-08 18:59:53.309266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.808 ms 00:32:24.767 [2024-10-08 18:59:53.309282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.767 [2024-10-08 18:59:53.414623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.767 [2024-10-08 18:59:53.414687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:32:24.767 [2024-10-08 18:59:53.414712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 105.280 ms 00:32:24.767 [2024-10-08 18:59:53.414723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
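[Editor's note, not part of the captured output] The FTL startup trace running here was triggered by the bdev_ftl_create call above. As a hedged recap of how the test assembled the device stack, every command, path, and value below is copied from the trace; only the $RPC shorthand is introduced for readability:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Base volume: an lvstore on nvme0n1, then a 103424 MiB thin-provisioned lvol.
    # 103424 MiB is what get_bdev_size derives from the bdev_get_bdevs JSON:
    # block_size 4096 * num_blocks 26476544 / 1 MiB = 103424.
    $RPC bdev_lvol_create_lvstore nvme0n1 lvs
    $RPC bdev_lvol_create nvme0n1p0 103424 -t -u 427e7d37-7acd-47df-8156-30d75c7c5066

    # Write-buffer cache: attach the PCIe controller at 0000:00:10.0 and carve a
    # 5171 MiB split from it (the base_size/cache_size values seen in ftl/common.sh).
    $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    $RPC bdev_split_create nvc0n1 -s 5171 1        # yields nvc0n1p0

    # FTL bdev over base + cache: 60 MiB L2P DRAM limit (l2p_percentage=60),
    # 10% overprovisioning, 240 s RPC timeout to cover the long startup below.
    $RPC -t 240 bdev_ftl_create -b ftl0 -d 42fa695a-4e38-4c49-9636-c4afb6a0806a \
        -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10

The bulk of the startup time is the "Scrub NV cache" step above: about 2.5 s of the roughly 3 s total that the 'FTL startup' summary reports at the end of the sequence.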
00:32:24.767 [2024-10-08 18:59:53.454343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.767 [2024-10-08 18:59:53.454567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:32:24.767 [2024-10-08 18:59:53.454627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.488 ms 00:32:24.767 [2024-10-08 18:59:53.454641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:24.767 [2024-10-08 18:59:53.494146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:24.767 [2024-10-08 18:59:53.494211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:32:24.767 [2024-10-08 18:59:53.494230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.353 ms 00:32:24.767 [2024-10-08 18:59:53.494241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.026 [2024-10-08 18:59:53.532226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.026 [2024-10-08 18:59:53.532415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:25.026 [2024-10-08 18:59:53.532456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.887 ms 00:32:25.026 [2024-10-08 18:59:53.532472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.026 [2024-10-08 18:59:53.532646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.026 [2024-10-08 18:59:53.532667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:25.026 [2024-10-08 18:59:53.532689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:32:25.026 [2024-10-08 18:59:53.532719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.026 [2024-10-08 18:59:53.532815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:25.026 [2024-10-08 18:59:53.532829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:25.026 [2024-10-08 18:59:53.532849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:32:25.026 [2024-10-08 18:59:53.532862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:25.026 [2024-10-08 18:59:53.534115] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:25.026 [2024-10-08 18:59:53.538839] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3062.564 ms, result 0 00:32:25.026 [2024-10-08 18:59:53.539850] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:25.026 { 00:32:25.026 "name": "ftl0", 00:32:25.026 "uuid": "1e6e43eb-28c1-40da-a9ff-547ddd670846" 00:32:25.026 } 00:32:25.026 18:59:53 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:32:25.026 18:59:53 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:32:25.026 18:59:53 ftl.ftl_trim -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:25.026 18:59:53 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local i 00:32:25.026 18:59:53 ftl.ftl_trim -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:25.026 18:59:53 ftl.ftl_trim -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:25.026 18:59:53 ftl.ftl_trim -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:25.285 18:59:53 ftl.ftl_trim -- 
common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:32:25.285 [ 00:32:25.285 { 00:32:25.285 "name": "ftl0", 00:32:25.285 "aliases": [ 00:32:25.285 "1e6e43eb-28c1-40da-a9ff-547ddd670846" 00:32:25.285 ], 00:32:25.285 "product_name": "FTL disk", 00:32:25.285 "block_size": 4096, 00:32:25.285 "num_blocks": 23592960, 00:32:25.285 "uuid": "1e6e43eb-28c1-40da-a9ff-547ddd670846", 00:32:25.285 "assigned_rate_limits": { 00:32:25.285 "rw_ios_per_sec": 0, 00:32:25.285 "rw_mbytes_per_sec": 0, 00:32:25.285 "r_mbytes_per_sec": 0, 00:32:25.285 "w_mbytes_per_sec": 0 00:32:25.285 }, 00:32:25.285 "claimed": false, 00:32:25.285 "zoned": false, 00:32:25.285 "supported_io_types": { 00:32:25.285 "read": true, 00:32:25.285 "write": true, 00:32:25.285 "unmap": true, 00:32:25.285 "flush": true, 00:32:25.285 "reset": false, 00:32:25.285 "nvme_admin": false, 00:32:25.285 "nvme_io": false, 00:32:25.285 "nvme_io_md": false, 00:32:25.285 "write_zeroes": true, 00:32:25.285 "zcopy": false, 00:32:25.285 "get_zone_info": false, 00:32:25.285 "zone_management": false, 00:32:25.285 "zone_append": false, 00:32:25.285 "compare": false, 00:32:25.285 "compare_and_write": false, 00:32:25.285 "abort": false, 00:32:25.285 "seek_hole": false, 00:32:25.285 "seek_data": false, 00:32:25.285 "copy": false, 00:32:25.285 "nvme_iov_md": false 00:32:25.285 }, 00:32:25.285 "driver_specific": { 00:32:25.285 "ftl": { 00:32:25.285 "base_bdev": "42fa695a-4e38-4c49-9636-c4afb6a0806a", 00:32:25.285 "cache": "nvc0n1p0" 00:32:25.285 } 00:32:25.285 } 00:32:25.285 } 00:32:25.285 ] 00:32:25.285 18:59:54 ftl.ftl_trim -- common/autotest_common.sh@907 -- # return 0 00:32:25.285 18:59:54 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:32:25.285 18:59:54 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:32:25.545 18:59:54 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:32:25.545 18:59:54 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:32:25.804 18:59:54 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:32:25.804 { 00:32:25.804 "name": "ftl0", 00:32:25.804 "aliases": [ 00:32:25.804 "1e6e43eb-28c1-40da-a9ff-547ddd670846" 00:32:25.804 ], 00:32:25.804 "product_name": "FTL disk", 00:32:25.804 "block_size": 4096, 00:32:25.805 "num_blocks": 23592960, 00:32:25.805 "uuid": "1e6e43eb-28c1-40da-a9ff-547ddd670846", 00:32:25.805 "assigned_rate_limits": { 00:32:25.805 "rw_ios_per_sec": 0, 00:32:25.805 "rw_mbytes_per_sec": 0, 00:32:25.805 "r_mbytes_per_sec": 0, 00:32:25.805 "w_mbytes_per_sec": 0 00:32:25.805 }, 00:32:25.805 "claimed": false, 00:32:25.805 "zoned": false, 00:32:25.805 "supported_io_types": { 00:32:25.805 "read": true, 00:32:25.805 "write": true, 00:32:25.805 "unmap": true, 00:32:25.805 "flush": true, 00:32:25.805 "reset": false, 00:32:25.805 "nvme_admin": false, 00:32:25.805 "nvme_io": false, 00:32:25.805 "nvme_io_md": false, 00:32:25.805 "write_zeroes": true, 00:32:25.805 "zcopy": false, 00:32:25.805 "get_zone_info": false, 00:32:25.805 "zone_management": false, 00:32:25.805 "zone_append": false, 00:32:25.805 "compare": false, 00:32:25.805 "compare_and_write": false, 00:32:25.805 "abort": false, 00:32:25.805 "seek_hole": false, 00:32:25.805 "seek_data": false, 00:32:25.805 "copy": false, 00:32:25.805 "nvme_iov_md": false 00:32:25.805 }, 00:32:25.805 "driver_specific": { 00:32:25.805 "ftl": { 00:32:25.805 "base_bdev": "42fa695a-4e38-4c49-9636-c4afb6a0806a", 
00:32:25.805 "cache": "nvc0n1p0" 00:32:25.805 } 00:32:25.805 } 00:32:25.805 } 00:32:25.805 ]' 00:32:25.805 18:59:54 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:32:26.064 18:59:54 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:32:26.064 18:59:54 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:32:26.064 [2024-10-08 18:59:54.770528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.064 [2024-10-08 18:59:54.770798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:26.064 [2024-10-08 18:59:54.770844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:26.064 [2024-10-08 18:59:54.770861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.064 [2024-10-08 18:59:54.770930] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:32:26.064 [2024-10-08 18:59:54.775884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.064 [2024-10-08 18:59:54.775921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:26.064 [2024-10-08 18:59:54.775944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.925 ms 00:32:26.064 [2024-10-08 18:59:54.775964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.064 [2024-10-08 18:59:54.776753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.064 [2024-10-08 18:59:54.776776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:26.064 [2024-10-08 18:59:54.776797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.695 ms 00:32:26.064 [2024-10-08 18:59:54.776809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.064 [2024-10-08 18:59:54.780180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.064 [2024-10-08 18:59:54.780208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:26.064 [2024-10-08 18:59:54.780225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.323 ms 00:32:26.064 [2024-10-08 18:59:54.780237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.064 [2024-10-08 18:59:54.786842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.064 [2024-10-08 18:59:54.786881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:26.064 [2024-10-08 18:59:54.786900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.522 ms 00:32:26.064 [2024-10-08 18:59:54.786912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.324 [2024-10-08 18:59:54.828648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.324 [2024-10-08 18:59:54.828694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:26.324 [2024-10-08 18:59:54.828716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.597 ms 00:32:26.324 [2024-10-08 18:59:54.828727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.324 [2024-10-08 18:59:54.852703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.324 [2024-10-08 18:59:54.852764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:26.324 [2024-10-08 18:59:54.852784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 23.874 ms 00:32:26.324 [2024-10-08 18:59:54.852796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.324 [2024-10-08 18:59:54.853079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.324 [2024-10-08 18:59:54.853094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:26.324 [2024-10-08 18:59:54.853109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.187 ms 00:32:26.324 [2024-10-08 18:59:54.853120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.324 [2024-10-08 18:59:54.893540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.324 [2024-10-08 18:59:54.893585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:32:26.324 [2024-10-08 18:59:54.893604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.371 ms 00:32:26.324 [2024-10-08 18:59:54.893616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.324 [2024-10-08 18:59:54.934943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.324 [2024-10-08 18:59:54.934994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:32:26.324 [2024-10-08 18:59:54.935017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.207 ms 00:32:26.324 [2024-10-08 18:59:54.935028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.324 [2024-10-08 18:59:54.976067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.324 [2024-10-08 18:59:54.976115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:26.324 [2024-10-08 18:59:54.976136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.935 ms 00:32:26.324 [2024-10-08 18:59:54.976148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.324 [2024-10-08 18:59:55.017515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.324 [2024-10-08 18:59:55.017559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:26.324 [2024-10-08 18:59:55.017578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.168 ms 00:32:26.324 [2024-10-08 18:59:55.017588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.324 [2024-10-08 18:59:55.017688] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:26.324 [2024-10-08 18:59:55.017708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:26.324 [2024-10-08 18:59:55.017725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:26.324 [2024-10-08 18:59:55.017736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:26.324 [2024-10-08 18:59:55.017750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:26.324 [2024-10-08 18:59:55.017761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.017779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.017790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.017804] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.017815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.017829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.017840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.017854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.017866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.017880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.017891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.017905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.017916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.017929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.017940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.017953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.017983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 
[2024-10-08 18:59:55.018190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:32:26.325 [2024-10-08 18:59:55.018530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:26.325 [2024-10-08 18:59:55.018984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:26.326 [2024-10-08 18:59:55.018999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:26.326 [2024-10-08 18:59:55.019011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:26.326 [2024-10-08 18:59:55.019025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:26.326 [2024-10-08 18:59:55.019037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:26.326 [2024-10-08 18:59:55.019052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:26.326 [2024-10-08 18:59:55.019065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:26.326 [2024-10-08 18:59:55.019080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:26.326 [2024-10-08 18:59:55.019092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:26.326 [2024-10-08 18:59:55.019108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:26.326 [2024-10-08 18:59:55.019128] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:26.326 [2024-10-08 18:59:55.019149] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1e6e43eb-28c1-40da-a9ff-547ddd670846 00:32:26.326 [2024-10-08 18:59:55.019161] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:26.326 [2024-10-08 18:59:55.019175] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:32:26.326 [2024-10-08 18:59:55.019186] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:26.326 [2024-10-08 18:59:55.019201] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:26.326 [2024-10-08 18:59:55.019212] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:26.326 [2024-10-08 18:59:55.019226] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:32:26.326 [2024-10-08 18:59:55.019237] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:26.326 [2024-10-08 18:59:55.019250] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:26.326 [2024-10-08 18:59:55.019260] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:26.326 [2024-10-08 18:59:55.019274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.326 [2024-10-08 18:59:55.019286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:26.326 [2024-10-08 18:59:55.019301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.588 ms 00:32:26.326 [2024-10-08 18:59:55.019312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.326 [2024-10-08 18:59:55.042216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.326 [2024-10-08 18:59:55.042259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:26.326 [2024-10-08 18:59:55.042281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.845 ms 00:32:26.326 [2024-10-08 18:59:55.042293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.326 [2024-10-08 18:59:55.042933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:26.326 [2024-10-08 18:59:55.042972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:26.326 [2024-10-08 18:59:55.042992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.542 ms 00:32:26.326 [2024-10-08 18:59:55.043003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.585 [2024-10-08 18:59:55.119365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:26.585 [2024-10-08 18:59:55.119411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:26.585 [2024-10-08 18:59:55.119429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:26.585 [2024-10-08 18:59:55.119448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.585 [2024-10-08 18:59:55.119610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:26.585 [2024-10-08 18:59:55.119626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:26.585 [2024-10-08 18:59:55.119644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:26.585 [2024-10-08 18:59:55.119656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.585 [2024-10-08 18:59:55.119760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:26.585 [2024-10-08 18:59:55.119775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:26.585 [2024-10-08 18:59:55.119793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:26.585 [2024-10-08 18:59:55.119804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.585 [2024-10-08 18:59:55.119848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:26.585 [2024-10-08 18:59:55.119874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:26.585 [2024-10-08 18:59:55.119889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:26.585 [2024-10-08 18:59:55.119904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.585 [2024-10-08 18:59:55.262437] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:26.585 [2024-10-08 18:59:55.262721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:26.585 [2024-10-08 18:59:55.262753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:26.585 [2024-10-08 18:59:55.262766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.845 [2024-10-08 18:59:55.366907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:26.845 [2024-10-08 18:59:55.366982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:26.845 [2024-10-08 18:59:55.367000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:26.845 [2024-10-08 18:59:55.367014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.845 [2024-10-08 18:59:55.367157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:26.845 [2024-10-08 18:59:55.367170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:26.845 [2024-10-08 18:59:55.367187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:26.845 [2024-10-08 18:59:55.367197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.845 [2024-10-08 18:59:55.367273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:26.845 [2024-10-08 18:59:55.367284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:26.845 [2024-10-08 18:59:55.367315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:26.845 [2024-10-08 18:59:55.367326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.845 [2024-10-08 18:59:55.367489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:26.845 [2024-10-08 18:59:55.367503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:26.845 [2024-10-08 18:59:55.367516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:26.845 [2024-10-08 18:59:55.367527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.845 [2024-10-08 18:59:55.367598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:26.845 [2024-10-08 18:59:55.367611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:26.845 [2024-10-08 18:59:55.367624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:26.845 [2024-10-08 18:59:55.367635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.845 [2024-10-08 18:59:55.367707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:26.845 [2024-10-08 18:59:55.367718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:26.845 [2024-10-08 18:59:55.367734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:26.845 [2024-10-08 18:59:55.367745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:26.845 [2024-10-08 18:59:55.367813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:26.845 [2024-10-08 18:59:55.367830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:26.845 [2024-10-08 18:59:55.367847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:26.845 [2024-10-08 18:59:55.367857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:32:26.845 [2024-10-08 18:59:55.368100] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 597.563 ms, result 0 00:32:26.845 true 00:32:26.845 18:59:55 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 76558 00:32:26.845 18:59:55 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 76558 ']' 00:32:26.845 18:59:55 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 76558 00:32:26.845 18:59:55 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:32:26.845 18:59:55 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:26.845 18:59:55 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76558 00:32:26.845 killing process with pid 76558 00:32:26.845 18:59:55 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:26.845 18:59:55 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:26.845 18:59:55 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76558' 00:32:26.845 18:59:55 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 76558 00:32:26.845 18:59:55 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 76558 00:32:33.415 19:00:00 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:32:33.415 65536+0 records in 00:32:33.415 65536+0 records out 00:32:33.415 268435456 bytes (268 MB, 256 MiB) copied, 1.06317 s, 252 MB/s 00:32:33.415 19:00:02 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:33.673 [2024-10-08 19:00:02.173358] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
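Two details in the trace above are worth pinning down. First, the killprocess helper from common/autotest_common.sh traces out its own control flow ('[' -z 76558 ']', kill -0 76558, ps --no-headers -o comm= 76558, then kill and wait). Reconstructed from those traced commands, the helper behaves roughly like the sketch below; this is an inference from the xtrace, not the verbatim source:

  killprocess() {
      local pid=$1 process_name
      [ -z "$pid" ] && return 1              # traced as: '[' -z 76558 ']'
      kill -0 "$pid" || return 1             # probe that the pid is still alive
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")   # 'reactor_0' in this run
      fi
      if [ "$process_name" = sudo ]; then
          :                                  # sudo branch not taken in this run; elided
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                            # reap, letting the FTL shutdown above run to completion
  }

Second, the dd numbers are self-consistent: 65536 records of 4 KiB are 268435456 bytes, i.e. 256 MiB, and dividing by the reported 1.06317 s reproduces the printed decimal throughput:

  echo $((65536 * 4096))                                             # 268435456 bytes = 256 MiB
  awk 'BEGIN { printf "%.0f MB/s\n", 268435456 / 1.06317 / 1e6 }'    # 252 MB/s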
00:32:33.673 [2024-10-08 19:00:02.173776] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76767 ] 00:32:33.673 [2024-10-08 19:00:02.365001] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:33.930 [2024-10-08 19:00:02.674820] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:32:34.497 [2024-10-08 19:00:03.027052] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:34.497 [2024-10-08 19:00:03.027119] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:34.497 [2024-10-08 19:00:03.191355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.497 [2024-10-08 19:00:03.191599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:34.497 [2024-10-08 19:00:03.191639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:34.497 [2024-10-08 19:00:03.191652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.497 [2024-10-08 19:00:03.195393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.497 [2024-10-08 19:00:03.195444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:34.497 [2024-10-08 19:00:03.195470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.707 ms 00:32:34.497 [2024-10-08 19:00:03.195497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.497 [2024-10-08 19:00:03.195622] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:34.497 [2024-10-08 19:00:03.196700] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:34.497 [2024-10-08 19:00:03.196735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.497 [2024-10-08 19:00:03.196752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:34.497 [2024-10-08 19:00:03.196764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.122 ms 00:32:34.497 [2024-10-08 19:00:03.196775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.497 [2024-10-08 19:00:03.198402] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:32:34.497 [2024-10-08 19:00:03.220739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.497 [2024-10-08 19:00:03.220782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:34.497 [2024-10-08 19:00:03.220799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.336 ms 00:32:34.497 [2024-10-08 19:00:03.220811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.497 [2024-10-08 19:00:03.220926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.497 [2024-10-08 19:00:03.220942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:34.497 [2024-10-08 19:00:03.220983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:32:34.497 [2024-10-08 19:00:03.220996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.497 [2024-10-08 19:00:03.227899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:32:34.497 [2024-10-08 19:00:03.227935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:34.497 [2024-10-08 19:00:03.227949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.844 ms 00:32:34.497 [2024-10-08 19:00:03.227975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.497 [2024-10-08 19:00:03.228104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.497 [2024-10-08 19:00:03.228124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:34.497 [2024-10-08 19:00:03.228137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:32:34.497 [2024-10-08 19:00:03.228148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.497 [2024-10-08 19:00:03.228182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.497 [2024-10-08 19:00:03.228193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:34.497 [2024-10-08 19:00:03.228205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:32:34.497 [2024-10-08 19:00:03.228216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.497 [2024-10-08 19:00:03.228242] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:32:34.497 [2024-10-08 19:00:03.233100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.497 [2024-10-08 19:00:03.233137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:34.497 [2024-10-08 19:00:03.233150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.865 ms 00:32:34.497 [2024-10-08 19:00:03.233160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.497 [2024-10-08 19:00:03.233251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.497 [2024-10-08 19:00:03.233269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:34.497 [2024-10-08 19:00:03.233281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:32:34.497 [2024-10-08 19:00:03.233292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.497 [2024-10-08 19:00:03.233318] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:34.497 [2024-10-08 19:00:03.233342] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:34.497 [2024-10-08 19:00:03.233393] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:34.497 [2024-10-08 19:00:03.233412] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:34.497 [2024-10-08 19:00:03.233507] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:34.497 [2024-10-08 19:00:03.233521] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:34.497 [2024-10-08 19:00:03.233534] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:34.497 [2024-10-08 19:00:03.233548] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:34.498 [2024-10-08 19:00:03.233560] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:34.498 [2024-10-08 19:00:03.233572] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:32:34.498 [2024-10-08 19:00:03.233583] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:34.498 [2024-10-08 19:00:03.233593] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:34.498 [2024-10-08 19:00:03.233603] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:34.498 [2024-10-08 19:00:03.233613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.498 [2024-10-08 19:00:03.233624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:34.498 [2024-10-08 19:00:03.233638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:32:34.498 [2024-10-08 19:00:03.233648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.498 [2024-10-08 19:00:03.233726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.498 [2024-10-08 19:00:03.233737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:34.498 [2024-10-08 19:00:03.233747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:32:34.498 [2024-10-08 19:00:03.233757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.498 [2024-10-08 19:00:03.233870] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:34.498 [2024-10-08 19:00:03.233883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:34.498 [2024-10-08 19:00:03.233895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:34.498 [2024-10-08 19:00:03.233910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:34.498 [2024-10-08 19:00:03.233922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:34.498 [2024-10-08 19:00:03.233932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:34.498 [2024-10-08 19:00:03.233942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:32:34.498 [2024-10-08 19:00:03.233952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:34.498 [2024-10-08 19:00:03.233962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:32:34.498 [2024-10-08 19:00:03.233972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:34.498 [2024-10-08 19:00:03.233982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:34.498 [2024-10-08 19:00:03.234019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:32:34.498 [2024-10-08 19:00:03.234029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:34.498 [2024-10-08 19:00:03.234039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:34.498 [2024-10-08 19:00:03.234049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:32:34.498 [2024-10-08 19:00:03.234060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:34.498 [2024-10-08 19:00:03.234070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:34.498 [2024-10-08 19:00:03.234080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:32:34.498 [2024-10-08 19:00:03.234090] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:34.498 [2024-10-08 19:00:03.234100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:34.498 [2024-10-08 19:00:03.234110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:32:34.498 [2024-10-08 19:00:03.234120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:34.498 [2024-10-08 19:00:03.234130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:34.498 [2024-10-08 19:00:03.234140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:32:34.498 [2024-10-08 19:00:03.234151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:34.498 [2024-10-08 19:00:03.234162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:34.498 [2024-10-08 19:00:03.234182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:32:34.498 [2024-10-08 19:00:03.234191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:34.498 [2024-10-08 19:00:03.234200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:34.498 [2024-10-08 19:00:03.234209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:32:34.498 [2024-10-08 19:00:03.234218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:34.498 [2024-10-08 19:00:03.234227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:34.498 [2024-10-08 19:00:03.234236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:32:34.498 [2024-10-08 19:00:03.234245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:34.498 [2024-10-08 19:00:03.234254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:34.498 [2024-10-08 19:00:03.234263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:32:34.498 [2024-10-08 19:00:03.234272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:34.498 [2024-10-08 19:00:03.234281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:34.498 [2024-10-08 19:00:03.234290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:32:34.498 [2024-10-08 19:00:03.234298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:34.498 [2024-10-08 19:00:03.234307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:34.498 [2024-10-08 19:00:03.234316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:32:34.498 [2024-10-08 19:00:03.234326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:34.498 [2024-10-08 19:00:03.234334] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:34.498 [2024-10-08 19:00:03.234344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:34.498 [2024-10-08 19:00:03.234354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:34.498 [2024-10-08 19:00:03.234364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:34.498 [2024-10-08 19:00:03.234374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:34.498 [2024-10-08 19:00:03.234383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:34.498 [2024-10-08 19:00:03.234392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:34.498 
[2024-10-08 19:00:03.234401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:34.498 [2024-10-08 19:00:03.234410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:34.498 [2024-10-08 19:00:03.234419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:34.498 [2024-10-08 19:00:03.234430] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:34.498 [2024-10-08 19:00:03.234442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:34.498 [2024-10-08 19:00:03.234457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:32:34.498 [2024-10-08 19:00:03.234468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:32:34.498 [2024-10-08 19:00:03.234483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:32:34.498 [2024-10-08 19:00:03.234495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:32:34.498 [2024-10-08 19:00:03.234505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:32:34.498 [2024-10-08 19:00:03.234515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:32:34.498 [2024-10-08 19:00:03.234526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:32:34.498 [2024-10-08 19:00:03.234536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:32:34.498 [2024-10-08 19:00:03.234546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:32:34.498 [2024-10-08 19:00:03.234570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:32:34.498 [2024-10-08 19:00:03.234581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:32:34.498 [2024-10-08 19:00:03.234592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:32:34.498 [2024-10-08 19:00:03.234602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:32:34.498 [2024-10-08 19:00:03.234613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:32:34.498 [2024-10-08 19:00:03.234623] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:34.498 [2024-10-08 19:00:03.234634] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:34.498 [2024-10-08 19:00:03.234646] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:32:34.498 [2024-10-08 19:00:03.234656] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:34.498 [2024-10-08 19:00:03.234666] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:34.498 [2024-10-08 19:00:03.234677] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:34.498 [2024-10-08 19:00:03.234687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.498 [2024-10-08 19:00:03.234700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:34.498 [2024-10-08 19:00:03.234710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.878 ms 00:32:34.498 [2024-10-08 19:00:03.234722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.756 [2024-10-08 19:00:03.280623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.756 [2024-10-08 19:00:03.280896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:34.756 [2024-10-08 19:00:03.280924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.841 ms 00:32:34.756 [2024-10-08 19:00:03.280936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.756 [2024-10-08 19:00:03.281130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.757 [2024-10-08 19:00:03.281145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:34.757 [2024-10-08 19:00:03.281157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:32:34.757 [2024-10-08 19:00:03.281167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.757 [2024-10-08 19:00:03.330038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.757 [2024-10-08 19:00:03.330095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:34.757 [2024-10-08 19:00:03.330111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.845 ms 00:32:34.757 [2024-10-08 19:00:03.330122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.757 [2024-10-08 19:00:03.330260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.757 [2024-10-08 19:00:03.330275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:34.757 [2024-10-08 19:00:03.330286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:34.757 [2024-10-08 19:00:03.330296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.757 [2024-10-08 19:00:03.330737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.757 [2024-10-08 19:00:03.330756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:34.757 [2024-10-08 19:00:03.330767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.414 ms 00:32:34.757 [2024-10-08 19:00:03.330778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.757 [2024-10-08 19:00:03.330898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.757 [2024-10-08 19:00:03.330912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:34.757 [2024-10-08 19:00:03.330923] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:32:34.757 [2024-10-08 19:00:03.330933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.757 [2024-10-08 19:00:03.350791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.757 [2024-10-08 19:00:03.350857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:34.757 [2024-10-08 19:00:03.350873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.833 ms 00:32:34.757 [2024-10-08 19:00:03.350900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.757 [2024-10-08 19:00:03.371776] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:32:34.757 [2024-10-08 19:00:03.371826] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:34.757 [2024-10-08 19:00:03.371847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.757 [2024-10-08 19:00:03.371858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:34.757 [2024-10-08 19:00:03.371871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.756 ms 00:32:34.757 [2024-10-08 19:00:03.371881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.757 [2024-10-08 19:00:03.402900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.757 [2024-10-08 19:00:03.402951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:34.757 [2024-10-08 19:00:03.402981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.899 ms 00:32:34.757 [2024-10-08 19:00:03.402998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.757 [2024-10-08 19:00:03.424627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.757 [2024-10-08 19:00:03.424819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:34.757 [2024-10-08 19:00:03.424843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.530 ms 00:32:34.757 [2024-10-08 19:00:03.424854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.757 [2024-10-08 19:00:03.446128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.757 [2024-10-08 19:00:03.446178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:34.757 [2024-10-08 19:00:03.446195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.180 ms 00:32:34.757 [2024-10-08 19:00:03.446205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:34.757 [2024-10-08 19:00:03.447026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:34.757 [2024-10-08 19:00:03.447050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:34.757 [2024-10-08 19:00:03.447062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.697 ms 00:32:34.757 [2024-10-08 19:00:03.447072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.015 [2024-10-08 19:00:03.551289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.015 [2024-10-08 19:00:03.551367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:35.015 [2024-10-08 19:00:03.551386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 104.181 ms 00:32:35.015 [2024-10-08 19:00:03.551398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.015 [2024-10-08 19:00:03.565952] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:32:35.015 [2024-10-08 19:00:03.583115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.015 [2024-10-08 19:00:03.583448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:35.015 [2024-10-08 19:00:03.583478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.490 ms 00:32:35.015 [2024-10-08 19:00:03.583490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.015 [2024-10-08 19:00:03.583620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.015 [2024-10-08 19:00:03.583634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:35.015 [2024-10-08 19:00:03.583646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:32:35.015 [2024-10-08 19:00:03.583656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.015 [2024-10-08 19:00:03.583715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.015 [2024-10-08 19:00:03.583731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:35.015 [2024-10-08 19:00:03.583746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:32:35.015 [2024-10-08 19:00:03.583756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.016 [2024-10-08 19:00:03.583779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.016 [2024-10-08 19:00:03.583791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:35.016 [2024-10-08 19:00:03.583801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:35.016 [2024-10-08 19:00:03.583811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.016 [2024-10-08 19:00:03.583847] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:35.016 [2024-10-08 19:00:03.583872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.016 [2024-10-08 19:00:03.583883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:35.016 [2024-10-08 19:00:03.583893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:32:35.016 [2024-10-08 19:00:03.583906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.016 [2024-10-08 19:00:03.621710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.016 [2024-10-08 19:00:03.621880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:35.016 [2024-10-08 19:00:03.621904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.782 ms 00:32:35.016 [2024-10-08 19:00:03.621916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:35.016 [2024-10-08 19:00:03.622093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:35.016 [2024-10-08 19:00:03.622110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:35.016 [2024-10-08 19:00:03.622125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:32:35.016 [2024-10-08 19:00:03.622136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
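Before the startup sequence wraps up below, a quick cross-check of the layout dump it produced: the superblock metadata line 'Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00' and the human-readable 'Region l2p ... blocks: 90.00 MiB' agree, assuming type 0x2 is the L2P region and the usual 4 KiB FTL block size (both are assumptions here; the log does not state them). The same 90 MiB also falls out of 'L2P entries: 23592960' times 'L2P address size: 4':

  printf '0x5a00 blocks = %d MiB\n' $(( 0x5a00 * 4096 / 1024 / 1024 ))   # 23040 blocks x 4 KiB = 90 MiB
  printf 'L2P table    = %d MiB\n' $(( 23592960 * 4 / 1024 / 1024 ))     # 23592960 entries x 4 B = 90 MiB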
00:32:35.016 [2024-10-08 19:00:03.623087] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:35.016 [2024-10-08 19:00:03.627622] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 431.391 ms, result 0 00:32:35.016 [2024-10-08 19:00:03.628455] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:35.016 [2024-10-08 19:00:03.647042] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:35.951  [2024-10-08T19:00:06.084Z] Copying: 30/256 [MB] (30 MBps) [2024-10-08T19:00:07.017Z] Copying: 59/256 [MB] (29 MBps) [2024-10-08T19:00:07.954Z] Copying: 89/256 [MB] (29 MBps) [2024-10-08T19:00:08.899Z] Copying: 118/256 [MB] (29 MBps) [2024-10-08T19:00:09.833Z] Copying: 147/256 [MB] (29 MBps) [2024-10-08T19:00:10.768Z] Copying: 176/256 [MB] (29 MBps) [2024-10-08T19:00:11.706Z] Copying: 206/256 [MB] (29 MBps) [2024-10-08T19:00:12.675Z] Copying: 235/256 [MB] (29 MBps) [2024-10-08T19:00:12.675Z] Copying: 256/256 [MB] (average 29 MBps)[2024-10-08 19:00:12.351403] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:43.918 [2024-10-08 19:00:12.368797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.918 [2024-10-08 19:00:12.368850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:43.918 [2024-10-08 19:00:12.368868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:43.918 [2024-10-08 19:00:12.368881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.918 [2024-10-08 19:00:12.368909] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:32:43.918 [2024-10-08 19:00:12.374073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.918 [2024-10-08 19:00:12.374246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:43.918 [2024-10-08 19:00:12.374272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.145 ms 00:32:43.918 [2024-10-08 19:00:12.374285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.918 [2024-10-08 19:00:12.376742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.918 [2024-10-08 19:00:12.376777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:43.918 [2024-10-08 19:00:12.376792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.421 ms 00:32:43.918 [2024-10-08 19:00:12.376811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.918 [2024-10-08 19:00:12.383159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.918 [2024-10-08 19:00:12.383190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:43.918 [2024-10-08 19:00:12.383203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.325 ms 00:32:43.918 [2024-10-08 19:00:12.383214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.918 [2024-10-08 19:00:12.389115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.918 [2024-10-08 19:00:12.389146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:43.918 [2024-10-08 19:00:12.389158] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.865 ms 00:32:43.918 [2024-10-08 19:00:12.389174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.918 [2024-10-08 19:00:12.426580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.918 [2024-10-08 19:00:12.426619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:43.918 [2024-10-08 19:00:12.426633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.341 ms 00:32:43.918 [2024-10-08 19:00:12.426644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.918 [2024-10-08 19:00:12.450861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.918 [2024-10-08 19:00:12.450901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:43.918 [2024-10-08 19:00:12.450915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.159 ms 00:32:43.918 [2024-10-08 19:00:12.450926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.918 [2024-10-08 19:00:12.451074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.918 [2024-10-08 19:00:12.451088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:43.918 [2024-10-08 19:00:12.451099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:32:43.918 [2024-10-08 19:00:12.451110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.918 [2024-10-08 19:00:12.490950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.918 [2024-10-08 19:00:12.491007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:32:43.918 [2024-10-08 19:00:12.491021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.820 ms 00:32:43.918 [2024-10-08 19:00:12.491031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.918 [2024-10-08 19:00:12.528785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.918 [2024-10-08 19:00:12.528819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:32:43.918 [2024-10-08 19:00:12.528833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.694 ms 00:32:43.918 [2024-10-08 19:00:12.528843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.918 [2024-10-08 19:00:12.565410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.918 [2024-10-08 19:00:12.565443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:43.918 [2024-10-08 19:00:12.565456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.509 ms 00:32:43.918 [2024-10-08 19:00:12.565466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.918 [2024-10-08 19:00:12.603121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.918 [2024-10-08 19:00:12.603156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:43.918 [2024-10-08 19:00:12.603169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.569 ms 00:32:43.918 [2024-10-08 19:00:12.603179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.918 [2024-10-08 19:00:12.603237] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:43.918 [2024-10-08 19:00:12.603254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:43.918 [2024-10-08 19:00:12.603266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:43.918 [2024-10-08 19:00:12.603278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:43.918 [2024-10-08 19:00:12.603289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:43.918 [2024-10-08 19:00:12.603301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:43.918 [2024-10-08 19:00:12.603311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603534] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 
19:00:12.603820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.603994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.604006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.604018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.604030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.604042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.604053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.604065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.604076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.604088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.604100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.604112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 
00:32:43.919 [2024-10-08 19:00:12.604123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.604135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.604146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.604158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.604169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.604180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.604192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.604203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.604214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.604225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.604236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.604248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.604259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.604270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.604281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.604293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.604304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.604327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.604338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.604348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.604358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.604369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.604379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:43.919 [2024-10-08 19:00:12.604389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:43.920 [2024-10-08 19:00:12.604400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 
wr_cnt: 0 state: free 00:32:43.920 [2024-10-08 19:00:12.604431] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:43.920 [2024-10-08 19:00:12.604442] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1e6e43eb-28c1-40da-a9ff-547ddd670846 00:32:43.920 [2024-10-08 19:00:12.604453] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:43.920 [2024-10-08 19:00:12.604463] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:32:43.920 [2024-10-08 19:00:12.604473] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:43.920 [2024-10-08 19:00:12.604483] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:43.920 [2024-10-08 19:00:12.604496] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:43.920 [2024-10-08 19:00:12.604506] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:43.920 [2024-10-08 19:00:12.604516] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:43.920 [2024-10-08 19:00:12.604525] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:43.920 [2024-10-08 19:00:12.604534] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:43.920 [2024-10-08 19:00:12.604544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.920 [2024-10-08 19:00:12.604554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:43.920 [2024-10-08 19:00:12.604565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.308 ms 00:32:43.920 [2024-10-08 19:00:12.604575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.920 [2024-10-08 19:00:12.625086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.920 [2024-10-08 19:00:12.625116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:43.920 [2024-10-08 19:00:12.625135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.490 ms 00:32:43.920 [2024-10-08 19:00:12.625145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.920 [2024-10-08 19:00:12.625717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.920 [2024-10-08 19:00:12.625732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:43.920 [2024-10-08 19:00:12.625743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:32:43.920 [2024-10-08 19:00:12.625754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:44.199 [2024-10-08 19:00:12.675009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:44.199 [2024-10-08 19:00:12.675048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:44.199 [2024-10-08 19:00:12.675061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:44.199 [2024-10-08 19:00:12.675072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:44.199 [2024-10-08 19:00:12.675153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:44.199 [2024-10-08 19:00:12.675165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:44.199 [2024-10-08 19:00:12.675180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:44.199 [2024-10-08 19:00:12.675190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:32:44.199 [2024-10-08 19:00:12.675238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:44.199 [2024-10-08 19:00:12.675252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:44.199 [2024-10-08 19:00:12.675267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:44.199 [2024-10-08 19:00:12.675277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:44.199 [2024-10-08 19:00:12.675296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:44.199 [2024-10-08 19:00:12.675307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:44.199 [2024-10-08 19:00:12.675318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:44.199 [2024-10-08 19:00:12.675328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:44.199 [2024-10-08 19:00:12.808220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:44.199 [2024-10-08 19:00:12.808267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:44.199 [2024-10-08 19:00:12.808289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:44.199 [2024-10-08 19:00:12.808299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:44.199 [2024-10-08 19:00:12.912730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:44.199 [2024-10-08 19:00:12.912783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:44.199 [2024-10-08 19:00:12.912798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:44.199 [2024-10-08 19:00:12.912810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:44.199 [2024-10-08 19:00:12.912909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:44.199 [2024-10-08 19:00:12.912922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:44.199 [2024-10-08 19:00:12.912933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:44.199 [2024-10-08 19:00:12.912943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:44.199 [2024-10-08 19:00:12.913004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:44.199 [2024-10-08 19:00:12.913015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:44.199 [2024-10-08 19:00:12.913026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:44.199 [2024-10-08 19:00:12.913036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:44.199 [2024-10-08 19:00:12.913150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:44.199 [2024-10-08 19:00:12.913164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:44.199 [2024-10-08 19:00:12.913175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:44.199 [2024-10-08 19:00:12.913185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:44.199 [2024-10-08 19:00:12.913226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:44.199 [2024-10-08 19:00:12.913239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:44.199 [2024-10-08 19:00:12.913249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:44.199 [2024-10-08 
19:00:12.913259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:44.199 [2024-10-08 19:00:12.913299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:44.200 [2024-10-08 19:00:12.913310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:44.200 [2024-10-08 19:00:12.913320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:44.200 [2024-10-08 19:00:12.913331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:44.200 [2024-10-08 19:00:12.913379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:44.200 [2024-10-08 19:00:12.913391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:44.200 [2024-10-08 19:00:12.913402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:44.200 [2024-10-08 19:00:12.913412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:44.200 [2024-10-08 19:00:12.913558] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 544.777 ms, result 0 00:32:46.101 00:32:46.101 00:32:46.101 19:00:14 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=76892 00:32:46.101 19:00:14 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:32:46.101 19:00:14 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 76892 00:32:46.101 19:00:14 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 76892 ']' 00:32:46.101 19:00:14 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:46.101 19:00:14 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:46.101 19:00:14 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:46.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:46.101 19:00:14 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:46.101 19:00:14 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:32:46.101 [2024-10-08 19:00:14.466792] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
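Here trim.sh@71 starts a fresh spdk_tgt with the ftl_init log flag and the harness blocks in waitforlisten until the RPC socket at /var/tmp/spdk.sock accepts requests. A minimal sketch of that launch-and-wait, with the polling loop as an illustrative stand-in for the waitforlisten helper (which additionally bounds its retries and checks that the pid is still alive):

    # launch the target the same way the xtrace above shows
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
    svcpid=$!
    # poll the default RPC socket until the target answers;
    # rpc_get_methods is a cheap query that any live target serves
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done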
00:32:46.101 [2024-10-08 19:00:14.466923] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76892 ] 00:32:46.101 [2024-10-08 19:00:14.631152] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:46.101 [2024-10-08 19:00:14.837952] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:32:47.034 19:00:15 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:47.034 19:00:15 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:32:47.034 19:00:15 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:32:47.292 [2024-10-08 19:00:15.954050] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:47.292 [2024-10-08 19:00:15.954112] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:47.551 [2024-10-08 19:00:16.117524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.551 [2024-10-08 19:00:16.117590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:47.551 [2024-10-08 19:00:16.117612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:47.551 [2024-10-08 19:00:16.117624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.551 [2024-10-08 19:00:16.121732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.551 [2024-10-08 19:00:16.121777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:47.551 [2024-10-08 19:00:16.121796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.077 ms 00:32:47.551 [2024-10-08 19:00:16.121806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.551 [2024-10-08 19:00:16.121936] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:47.551 [2024-10-08 19:00:16.123042] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:47.551 [2024-10-08 19:00:16.123079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.551 [2024-10-08 19:00:16.123091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:47.551 [2024-10-08 19:00:16.123105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.158 ms 00:32:47.551 [2024-10-08 19:00:16.123115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.551 [2024-10-08 19:00:16.124677] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:32:47.551 [2024-10-08 19:00:16.145059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.551 [2024-10-08 19:00:16.145105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:47.551 [2024-10-08 19:00:16.145121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.386 ms 00:32:47.551 [2024-10-08 19:00:16.145134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.551 [2024-10-08 19:00:16.145248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.551 [2024-10-08 19:00:16.145268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:47.551 [2024-10-08 19:00:16.145279] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:32:47.551 [2024-10-08 19:00:16.145292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.551 [2024-10-08 19:00:16.152365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.551 [2024-10-08 19:00:16.152405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:47.551 [2024-10-08 19:00:16.152419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.021 ms 00:32:47.551 [2024-10-08 19:00:16.152450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.551 [2024-10-08 19:00:16.152618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.551 [2024-10-08 19:00:16.152640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:47.551 [2024-10-08 19:00:16.152653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:32:47.551 [2024-10-08 19:00:16.152669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.551 [2024-10-08 19:00:16.152702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.551 [2024-10-08 19:00:16.152722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:47.551 [2024-10-08 19:00:16.152734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:32:47.551 [2024-10-08 19:00:16.152750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.551 [2024-10-08 19:00:16.152780] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:32:47.551 [2024-10-08 19:00:16.157894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.551 [2024-10-08 19:00:16.157924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:47.551 [2024-10-08 19:00:16.157942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.114 ms 00:32:47.551 [2024-10-08 19:00:16.157966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.552 [2024-10-08 19:00:16.158051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.552 [2024-10-08 19:00:16.158064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:47.552 [2024-10-08 19:00:16.158080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:32:47.552 [2024-10-08 19:00:16.158090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.552 [2024-10-08 19:00:16.158119] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:47.552 [2024-10-08 19:00:16.158144] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:47.552 [2024-10-08 19:00:16.158197] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:47.552 [2024-10-08 19:00:16.158224] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:47.552 [2024-10-08 19:00:16.158323] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:47.552 [2024-10-08 19:00:16.158337] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:47.552 [2024-10-08 19:00:16.158358] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:47.552 [2024-10-08 19:00:16.158371] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:47.552 [2024-10-08 19:00:16.158389] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:47.552 [2024-10-08 19:00:16.158400] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:32:47.552 [2024-10-08 19:00:16.158416] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:47.552 [2024-10-08 19:00:16.158426] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:47.552 [2024-10-08 19:00:16.158447] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:47.552 [2024-10-08 19:00:16.158463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.552 [2024-10-08 19:00:16.158478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:47.552 [2024-10-08 19:00:16.158489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.352 ms 00:32:47.552 [2024-10-08 19:00:16.158504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.552 [2024-10-08 19:00:16.158582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.552 [2024-10-08 19:00:16.158603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:47.552 [2024-10-08 19:00:16.158613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:32:47.552 [2024-10-08 19:00:16.158628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.552 [2024-10-08 19:00:16.158721] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:47.552 [2024-10-08 19:00:16.158744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:47.552 [2024-10-08 19:00:16.158755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:47.552 [2024-10-08 19:00:16.158770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:47.552 [2024-10-08 19:00:16.158781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:47.552 [2024-10-08 19:00:16.158798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:47.552 [2024-10-08 19:00:16.158808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:32:47.552 [2024-10-08 19:00:16.158828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:47.552 [2024-10-08 19:00:16.158838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:32:47.552 [2024-10-08 19:00:16.158852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:47.552 [2024-10-08 19:00:16.158862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:47.552 [2024-10-08 19:00:16.158876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:32:47.552 [2024-10-08 19:00:16.158886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:47.552 [2024-10-08 19:00:16.158900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:47.552 [2024-10-08 19:00:16.158910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:32:47.552 [2024-10-08 19:00:16.158924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:47.552 
[2024-10-08 19:00:16.158934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:47.552 [2024-10-08 19:00:16.158949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:32:47.552 [2024-10-08 19:00:16.158981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:47.552 [2024-10-08 19:00:16.158996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:47.552 [2024-10-08 19:00:16.159006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:32:47.552 [2024-10-08 19:00:16.159038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:47.552 [2024-10-08 19:00:16.159049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:47.552 [2024-10-08 19:00:16.159069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:32:47.552 [2024-10-08 19:00:16.159080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:47.552 [2024-10-08 19:00:16.159096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:47.552 [2024-10-08 19:00:16.159107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:32:47.552 [2024-10-08 19:00:16.159122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:47.552 [2024-10-08 19:00:16.159133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:47.552 [2024-10-08 19:00:16.159148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:32:47.552 [2024-10-08 19:00:16.159159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:47.552 [2024-10-08 19:00:16.159175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:47.552 [2024-10-08 19:00:16.159185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:32:47.552 [2024-10-08 19:00:16.159203] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:47.552 [2024-10-08 19:00:16.159214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:47.552 [2024-10-08 19:00:16.159230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:32:47.552 [2024-10-08 19:00:16.159240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:47.552 [2024-10-08 19:00:16.159257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:47.552 [2024-10-08 19:00:16.159268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:32:47.552 [2024-10-08 19:00:16.159289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:47.552 [2024-10-08 19:00:16.159300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:47.552 [2024-10-08 19:00:16.159315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:32:47.552 [2024-10-08 19:00:16.159326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:47.552 [2024-10-08 19:00:16.159347] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:47.552 [2024-10-08 19:00:16.159359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:47.552 [2024-10-08 19:00:16.159375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:47.552 [2024-10-08 19:00:16.159387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:47.552 [2024-10-08 19:00:16.159403] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:32:47.552 [2024-10-08 19:00:16.159414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:47.552 [2024-10-08 19:00:16.159430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:47.552 [2024-10-08 19:00:16.159450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:47.552 [2024-10-08 19:00:16.159465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:47.552 [2024-10-08 19:00:16.159476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:47.552 [2024-10-08 19:00:16.159494] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:47.552 [2024-10-08 19:00:16.159509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:47.552 [2024-10-08 19:00:16.159532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:32:47.552 [2024-10-08 19:00:16.159544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:32:47.552 [2024-10-08 19:00:16.159561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:32:47.552 [2024-10-08 19:00:16.159573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:32:47.552 [2024-10-08 19:00:16.159592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:32:47.552 [2024-10-08 19:00:16.159604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:32:47.552 [2024-10-08 19:00:16.159620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:32:47.552 [2024-10-08 19:00:16.159632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:32:47.552 [2024-10-08 19:00:16.159649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:32:47.552 [2024-10-08 19:00:16.159661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:32:47.552 [2024-10-08 19:00:16.159677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:32:47.552 [2024-10-08 19:00:16.159689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:32:47.552 [2024-10-08 19:00:16.159705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:32:47.552 [2024-10-08 19:00:16.159717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:32:47.552 [2024-10-08 19:00:16.159734] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:47.552 [2024-10-08 
19:00:16.159747] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:47.552 [2024-10-08 19:00:16.159775] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:47.552 [2024-10-08 19:00:16.159787] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:47.552 [2024-10-08 19:00:16.159804] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:47.552 [2024-10-08 19:00:16.159817] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:47.552 [2024-10-08 19:00:16.159835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.552 [2024-10-08 19:00:16.159847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:47.553 [2024-10-08 19:00:16.159863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.164 ms 00:32:47.553 [2024-10-08 19:00:16.159875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.553 [2024-10-08 19:00:16.201768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.553 [2024-10-08 19:00:16.201814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:47.553 [2024-10-08 19:00:16.201835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.816 ms 00:32:47.553 [2024-10-08 19:00:16.201846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.553 [2024-10-08 19:00:16.202015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.553 [2024-10-08 19:00:16.202028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:47.553 [2024-10-08 19:00:16.202045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:32:47.553 [2024-10-08 19:00:16.202056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.553 [2024-10-08 19:00:16.263675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.553 [2024-10-08 19:00:16.263716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:47.553 [2024-10-08 19:00:16.263738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.583 ms 00:32:47.553 [2024-10-08 19:00:16.263749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.553 [2024-10-08 19:00:16.263881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.553 [2024-10-08 19:00:16.263895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:47.553 [2024-10-08 19:00:16.263912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:47.553 [2024-10-08 19:00:16.263928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.553 [2024-10-08 19:00:16.264378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.553 [2024-10-08 19:00:16.264397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:47.553 [2024-10-08 19:00:16.264414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.422 ms 00:32:47.553 [2024-10-08 19:00:16.264424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:32:47.553 [2024-10-08 19:00:16.264554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.553 [2024-10-08 19:00:16.264570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:47.553 [2024-10-08 19:00:16.264586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:32:47.553 [2024-10-08 19:00:16.264596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.553 [2024-10-08 19:00:16.287001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.553 [2024-10-08 19:00:16.287040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:47.553 [2024-10-08 19:00:16.287060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.365 ms 00:32:47.553 [2024-10-08 19:00:16.287076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.812 [2024-10-08 19:00:16.307261] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:32:47.812 [2024-10-08 19:00:16.307298] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:47.812 [2024-10-08 19:00:16.307319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.812 [2024-10-08 19:00:16.307330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:47.812 [2024-10-08 19:00:16.307348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.104 ms 00:32:47.812 [2024-10-08 19:00:16.307358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.812 [2024-10-08 19:00:16.338228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.812 [2024-10-08 19:00:16.338272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:47.812 [2024-10-08 19:00:16.338291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.766 ms 00:32:47.812 [2024-10-08 19:00:16.338314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.812 [2024-10-08 19:00:16.356915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.812 [2024-10-08 19:00:16.356972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:47.812 [2024-10-08 19:00:16.356997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.506 ms 00:32:47.812 [2024-10-08 19:00:16.357008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.812 [2024-10-08 19:00:16.375874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.812 [2024-10-08 19:00:16.375908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:47.812 [2024-10-08 19:00:16.375926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.777 ms 00:32:47.812 [2024-10-08 19:00:16.375936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.812 [2024-10-08 19:00:16.376811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.812 [2024-10-08 19:00:16.376840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:47.812 [2024-10-08 19:00:16.376855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.735 ms 00:32:47.812 [2024-10-08 19:00:16.376869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.812 [2024-10-08 
19:00:16.469769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.812 [2024-10-08 19:00:16.469832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:47.812 [2024-10-08 19:00:16.469855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 92.862 ms 00:32:47.812 [2024-10-08 19:00:16.469872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.812 [2024-10-08 19:00:16.481345] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:32:47.812 [2024-10-08 19:00:16.497867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.812 [2024-10-08 19:00:16.497943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:47.812 [2024-10-08 19:00:16.497966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.873 ms 00:32:47.812 [2024-10-08 19:00:16.497983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.812 [2024-10-08 19:00:16.498126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.812 [2024-10-08 19:00:16.498145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:47.812 [2024-10-08 19:00:16.498157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:32:47.812 [2024-10-08 19:00:16.498171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.812 [2024-10-08 19:00:16.498235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.812 [2024-10-08 19:00:16.498252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:47.812 [2024-10-08 19:00:16.498263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:32:47.812 [2024-10-08 19:00:16.498279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.812 [2024-10-08 19:00:16.498304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.812 [2024-10-08 19:00:16.498321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:47.812 [2024-10-08 19:00:16.498331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:47.812 [2024-10-08 19:00:16.498357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.812 [2024-10-08 19:00:16.498396] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:47.812 [2024-10-08 19:00:16.498416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.812 [2024-10-08 19:00:16.498427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:47.812 [2024-10-08 19:00:16.498440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:32:47.812 [2024-10-08 19:00:16.498449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.812 [2024-10-08 19:00:16.536072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.812 [2024-10-08 19:00:16.536111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:47.812 [2024-10-08 19:00:16.536128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.593 ms 00:32:47.812 [2024-10-08 19:00:16.536139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.812 [2024-10-08 19:00:16.536260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.812 [2024-10-08 19:00:16.536273] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:47.812 [2024-10-08 19:00:16.536287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:32:47.812 [2024-10-08 19:00:16.536297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.812 [2024-10-08 19:00:16.537476] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:47.812 [2024-10-08 19:00:16.542032] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 419.550 ms, result 0 00:32:47.812 [2024-10-08 19:00:16.543418] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:47.812 Some configs were skipped because the RPC state that can call them passed over. 00:32:48.071 19:00:16 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:32:48.071 [2024-10-08 19:00:16.819976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.071 [2024-10-08 19:00:16.820037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:32:48.071 [2024-10-08 19:00:16.820059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.401 ms 00:32:48.071 [2024-10-08 19:00:16.820075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.071 [2024-10-08 19:00:16.820115] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.545 ms, result 0 00:32:48.071 true 00:32:48.330 19:00:16 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:32:48.330 [2024-10-08 19:00:17.007751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:48.330 [2024-10-08 19:00:17.007804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:32:48.330 [2024-10-08 19:00:17.007824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.000 ms 00:32:48.330 [2024-10-08 19:00:17.007837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:48.330 [2024-10-08 19:00:17.007884] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.137 ms, result 0 00:32:48.330 true 00:32:48.330 19:00:17 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 76892 00:32:48.330 19:00:17 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 76892 ']' 00:32:48.330 19:00:17 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 76892 00:32:48.330 19:00:17 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:32:48.330 19:00:17 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:48.330 19:00:17 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76892 00:32:48.330 killing process with pid 76892 00:32:48.330 19:00:17 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:48.330 19:00:17 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:48.330 19:00:17 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76892' 00:32:48.330 19:00:17 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 76892 00:32:48.330 19:00:17 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 76892 00:32:49.714 [2024-10-08 19:00:18.228478] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.714 [2024-10-08 19:00:18.228542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:49.714 [2024-10-08 19:00:18.228558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:49.714 [2024-10-08 19:00:18.228571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.714 [2024-10-08 19:00:18.228610] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:32:49.714 [2024-10-08 19:00:18.232994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.714 [2024-10-08 19:00:18.233027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:49.714 [2024-10-08 19:00:18.233044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.360 ms 00:32:49.714 [2024-10-08 19:00:18.233055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.714 [2024-10-08 19:00:18.233308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.714 [2024-10-08 19:00:18.233321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:49.714 [2024-10-08 19:00:18.233335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.209 ms 00:32:49.714 [2024-10-08 19:00:18.233347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.714 [2024-10-08 19:00:18.236550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.714 [2024-10-08 19:00:18.236585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:49.714 [2024-10-08 19:00:18.236599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.179 ms 00:32:49.714 [2024-10-08 19:00:18.236610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.714 [2024-10-08 19:00:18.242382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.714 [2024-10-08 19:00:18.242414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:49.714 [2024-10-08 19:00:18.242431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.736 ms 00:32:49.714 [2024-10-08 19:00:18.242444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.714 [2024-10-08 19:00:18.257491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.714 [2024-10-08 19:00:18.257532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:49.714 [2024-10-08 19:00:18.257551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.987 ms 00:32:49.715 [2024-10-08 19:00:18.257561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.715 [2024-10-08 19:00:18.268022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.715 [2024-10-08 19:00:18.268051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:49.715 [2024-10-08 19:00:18.268066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.405 ms 00:32:49.715 [2024-10-08 19:00:18.268088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.715 [2024-10-08 19:00:18.268219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.715 [2024-10-08 19:00:18.268233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:49.715 [2024-10-08 19:00:18.268246] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:32:49.715 [2024-10-08 19:00:18.268259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.715 [2024-10-08 19:00:18.283806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.715 [2024-10-08 19:00:18.283837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:32:49.715 [2024-10-08 19:00:18.283856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.518 ms 00:32:49.715 [2024-10-08 19:00:18.283865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.715 [2024-10-08 19:00:18.298873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.715 [2024-10-08 19:00:18.298904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:32:49.715 [2024-10-08 19:00:18.298930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.948 ms 00:32:49.715 [2024-10-08 19:00:18.298940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.715 [2024-10-08 19:00:18.313507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.715 [2024-10-08 19:00:18.313537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:49.715 [2024-10-08 19:00:18.313556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.500 ms 00:32:49.715 [2024-10-08 19:00:18.313565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.715 [2024-10-08 19:00:18.328636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.715 [2024-10-08 19:00:18.328678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:49.715 [2024-10-08 19:00:18.328697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.986 ms 00:32:49.715 [2024-10-08 19:00:18.328707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.715 [2024-10-08 19:00:18.328763] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:49.715 [2024-10-08 19:00:18.328786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:49.715 [2024-10-08 19:00:18.328804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:49.715 [2024-10-08 19:00:18.328816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:49.715 [2024-10-08 19:00:18.328832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:49.715 [2024-10-08 19:00:18.328843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:49.715 [2024-10-08 19:00:18.328863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:49.715 [2024-10-08 19:00:18.328874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:49.715 [2024-10-08 19:00:18.328892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:49.715 [2024-10-08 19:00:18.328903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:49.715 [2024-10-08 19:00:18.328921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:49.715 [2024-10-08 
19:00:18.328932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:49.715 [2024-10-08 19:00:18.328948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:49.715 [2024-10-08 19:00:18.328970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:49.715 [2024-10-08 19:00:18.328986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:49.715 [2024-10-08 19:00:18.328997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:49.715 [2024-10-08 19:00:18.329012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:49.715 [2024-10-08 19:00:18.329023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:49.715 [2024-10-08 19:00:18.329040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:49.715 [2024-10-08 19:00:18.329053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:49.715 [2024-10-08 19:00:18.329069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:49.715 [2024-10-08 19:00:18.329080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:49.715 [2024-10-08 19:00:18.329100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:49.715 [2024-10-08 19:00:18.329111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:49.715 [2024-10-08 19:00:18.329126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:49.715 [2024-10-08 19:00:18.329137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:49.715 [2024-10-08 19:00:18.329152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:49.715 [2024-10-08 19:00:18.329163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:49.715 [2024-10-08 19:00:18.329178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:49.715 [2024-10-08 19:00:18.329189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:49.715 [2024-10-08 19:00:18.329205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:49.715 [2024-10-08 19:00:18.329216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:49.715 [2024-10-08 19:00:18.329232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:49.715 [2024-10-08 19:00:18.329243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:49.715 [2024-10-08 19:00:18.329259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:49.715 [2024-10-08 19:00:18.329270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:32:49.715 [2024-10-08 19:00:18.329285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:49.715 [2024-10-08 19:00:18.329296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:49.715 [2024-10-08 19:00:18.329318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:49.715 [2024-10-08 19:00:18.329329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:49.715 [2024-10-08 19:00:18.329345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:49.715 [2024-10-08 19:00:18.329356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:49.715 [2024-10-08 19:00:18.329371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:49.715 [2024-10-08 19:00:18.329383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:49.715 [2024-10-08 19:00:18.329398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:49.715 [2024-10-08 19:00:18.329409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.329425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.329436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.329451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.329462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.329477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.329488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.329503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.329514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.329533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.329544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.329559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.329570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.329585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.329596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.329611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.329622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.329638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.329650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.329667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.329678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.329695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.329706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.329721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.329731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.329751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.329762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.329778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.329788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.329804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.329814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.329830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.329841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.329856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.329867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.329882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.329893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.329908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.329919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.329935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.329946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.329977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.329988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.330004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.330015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.330032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.330043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.330058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.330069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.330084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.330095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.330108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.330119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.330132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.330142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.330155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:49.716 [2024-10-08 19:00:18.330173] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:49.716 [2024-10-08 19:00:18.330188] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1e6e43eb-28c1-40da-a9ff-547ddd670846 00:32:49.716 [2024-10-08 19:00:18.330199] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:49.716 [2024-10-08 19:00:18.330212] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:32:49.716 [2024-10-08 19:00:18.330221] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:49.716 [2024-10-08 19:00:18.330234] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:49.716 [2024-10-08 19:00:18.330253] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:49.716 [2024-10-08 19:00:18.330267] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:49.716 [2024-10-08 19:00:18.330280] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:49.716 [2024-10-08 19:00:18.330291] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:49.716 [2024-10-08 19:00:18.330300] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:49.717 [2024-10-08 19:00:18.330313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:32:49.717 [2024-10-08 19:00:18.330323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:49.717 [2024-10-08 19:00:18.330338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.555 ms 00:32:49.717 [2024-10-08 19:00:18.330349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.717 [2024-10-08 19:00:18.351085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.717 [2024-10-08 19:00:18.351116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:49.717 [2024-10-08 19:00:18.351140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.710 ms 00:32:49.717 [2024-10-08 19:00:18.351151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.717 [2024-10-08 19:00:18.351759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.717 [2024-10-08 19:00:18.351779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:49.717 [2024-10-08 19:00:18.351797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:32:49.717 [2024-10-08 19:00:18.351808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.717 [2024-10-08 19:00:18.416447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:49.717 [2024-10-08 19:00:18.416481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:49.717 [2024-10-08 19:00:18.416499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:49.717 [2024-10-08 19:00:18.416516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.717 [2024-10-08 19:00:18.416611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:49.717 [2024-10-08 19:00:18.416623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:49.717 [2024-10-08 19:00:18.416639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:49.717 [2024-10-08 19:00:18.416649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.717 [2024-10-08 19:00:18.416706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:49.717 [2024-10-08 19:00:18.416719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:49.717 [2024-10-08 19:00:18.416740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:49.717 [2024-10-08 19:00:18.416750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.717 [2024-10-08 19:00:18.416780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:49.717 [2024-10-08 19:00:18.416791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:49.717 [2024-10-08 19:00:18.416806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:49.717 [2024-10-08 19:00:18.416816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.977 [2024-10-08 19:00:18.549102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:49.977 [2024-10-08 19:00:18.549152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:49.977 [2024-10-08 19:00:18.549173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:49.977 [2024-10-08 19:00:18.549185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.977 [2024-10-08 
19:00:18.663608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:49.977 [2024-10-08 19:00:18.663666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:49.977 [2024-10-08 19:00:18.663688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:49.977 [2024-10-08 19:00:18.663702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.977 [2024-10-08 19:00:18.663828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:49.977 [2024-10-08 19:00:18.663842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:49.977 [2024-10-08 19:00:18.663867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:49.977 [2024-10-08 19:00:18.663879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.977 [2024-10-08 19:00:18.663919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:49.977 [2024-10-08 19:00:18.663938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:49.977 [2024-10-08 19:00:18.663981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:49.977 [2024-10-08 19:00:18.663994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.977 [2024-10-08 19:00:18.664138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:49.977 [2024-10-08 19:00:18.664158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:49.977 [2024-10-08 19:00:18.664176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:49.977 [2024-10-08 19:00:18.664188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.977 [2024-10-08 19:00:18.664243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:49.977 [2024-10-08 19:00:18.664257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:49.978 [2024-10-08 19:00:18.664282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:49.978 [2024-10-08 19:00:18.664294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.978 [2024-10-08 19:00:18.664343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:49.978 [2024-10-08 19:00:18.664357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:49.978 [2024-10-08 19:00:18.664379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:49.978 [2024-10-08 19:00:18.664392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.978 [2024-10-08 19:00:18.664449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:49.978 [2024-10-08 19:00:18.664469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:49.978 [2024-10-08 19:00:18.664487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:49.978 [2024-10-08 19:00:18.664499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.978 [2024-10-08 19:00:18.664660] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 436.153 ms, result 0 00:32:51.355 19:00:19 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:32:51.355 19:00:19 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:51.355 [2024-10-08 19:00:20.010952] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:32:51.355 [2024-10-08 19:00:20.011143] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76961 ] 00:32:51.613 [2024-10-08 19:00:20.191739] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:51.873 [2024-10-08 19:00:20.393763] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:32:52.131 [2024-10-08 19:00:20.765310] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:52.131 [2024-10-08 19:00:20.765384] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:52.392 [2024-10-08 19:00:20.929405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.392 [2024-10-08 19:00:20.929457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:52.392 [2024-10-08 19:00:20.929477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:52.392 [2024-10-08 19:00:20.929505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.392 [2024-10-08 19:00:20.932925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.392 [2024-10-08 19:00:20.932979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:52.392 [2024-10-08 19:00:20.932993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.397 ms 00:32:52.392 [2024-10-08 19:00:20.933003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.392 [2024-10-08 19:00:20.933115] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:52.392 [2024-10-08 19:00:20.934115] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:52.392 [2024-10-08 19:00:20.934152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.392 [2024-10-08 19:00:20.934169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:52.392 [2024-10-08 19:00:20.934181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.044 ms 00:32:52.392 [2024-10-08 19:00:20.934192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.392 [2024-10-08 19:00:20.936012] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:32:52.392 [2024-10-08 19:00:20.956347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.392 [2024-10-08 19:00:20.956391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:52.392 [2024-10-08 19:00:20.956407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.335 ms 00:32:52.392 [2024-10-08 19:00:20.956419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.392 [2024-10-08 19:00:20.956613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.392 [2024-10-08 19:00:20.956642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:52.392 [2024-10-08 19:00:20.956659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.026 ms 00:32:52.392 [2024-10-08 19:00:20.956670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.392 [2024-10-08 19:00:20.963373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.392 [2024-10-08 19:00:20.963404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:52.392 [2024-10-08 19:00:20.963416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.656 ms 00:32:52.392 [2024-10-08 19:00:20.963427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.392 [2024-10-08 19:00:20.963535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.392 [2024-10-08 19:00:20.963554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:52.392 [2024-10-08 19:00:20.963565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:32:52.392 [2024-10-08 19:00:20.963576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.392 [2024-10-08 19:00:20.963608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.392 [2024-10-08 19:00:20.963618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:52.392 [2024-10-08 19:00:20.963629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:32:52.392 [2024-10-08 19:00:20.963639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.392 [2024-10-08 19:00:20.963682] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:32:52.392 [2024-10-08 19:00:20.968676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.392 [2024-10-08 19:00:20.968712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:52.392 [2024-10-08 19:00:20.968724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.002 ms 00:32:52.392 [2024-10-08 19:00:20.968735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.392 [2024-10-08 19:00:20.968824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.392 [2024-10-08 19:00:20.968842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:52.392 [2024-10-08 19:00:20.968854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:32:52.392 [2024-10-08 19:00:20.968865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.392 [2024-10-08 19:00:20.968890] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:52.392 [2024-10-08 19:00:20.968913] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:52.392 [2024-10-08 19:00:20.968951] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:52.392 [2024-10-08 19:00:20.968986] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:52.392 [2024-10-08 19:00:20.969089] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:52.392 [2024-10-08 19:00:20.969109] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:52.392 [2024-10-08 19:00:20.969123] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:52.392 [2024-10-08 19:00:20.969138] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:52.392 [2024-10-08 19:00:20.969151] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:52.392 [2024-10-08 19:00:20.969163] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:32:52.392 [2024-10-08 19:00:20.969174] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:52.392 [2024-10-08 19:00:20.969185] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:52.392 [2024-10-08 19:00:20.969196] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:52.392 [2024-10-08 19:00:20.969207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.392 [2024-10-08 19:00:20.969219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:52.392 [2024-10-08 19:00:20.969234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:32:52.392 [2024-10-08 19:00:20.969245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.392 [2024-10-08 19:00:20.969330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.392 [2024-10-08 19:00:20.969342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:52.392 [2024-10-08 19:00:20.969353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:32:52.392 [2024-10-08 19:00:20.969363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.392 [2024-10-08 19:00:20.969461] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:52.392 [2024-10-08 19:00:20.969474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:52.392 [2024-10-08 19:00:20.969486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:52.392 [2024-10-08 19:00:20.969501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:52.392 [2024-10-08 19:00:20.969512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:52.392 [2024-10-08 19:00:20.969523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:52.392 [2024-10-08 19:00:20.969533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:32:52.392 [2024-10-08 19:00:20.969544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:52.392 [2024-10-08 19:00:20.969555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:32:52.392 [2024-10-08 19:00:20.969565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:52.392 [2024-10-08 19:00:20.969576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:52.392 [2024-10-08 19:00:20.969599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:32:52.392 [2024-10-08 19:00:20.969610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:52.392 [2024-10-08 19:00:20.969620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:52.392 [2024-10-08 19:00:20.969631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:32:52.392 [2024-10-08 19:00:20.969642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:52.392 [2024-10-08 19:00:20.969652] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:52.392 [2024-10-08 19:00:20.969663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:32:52.392 [2024-10-08 19:00:20.969673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:52.392 [2024-10-08 19:00:20.969684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:52.392 [2024-10-08 19:00:20.969695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:32:52.392 [2024-10-08 19:00:20.969704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:52.392 [2024-10-08 19:00:20.969714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:52.392 [2024-10-08 19:00:20.969725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:32:52.392 [2024-10-08 19:00:20.969734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:52.392 [2024-10-08 19:00:20.969744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:52.392 [2024-10-08 19:00:20.969755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:32:52.392 [2024-10-08 19:00:20.969765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:52.392 [2024-10-08 19:00:20.969775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:52.392 [2024-10-08 19:00:20.969785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:32:52.392 [2024-10-08 19:00:20.969795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:52.392 [2024-10-08 19:00:20.969805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:52.392 [2024-10-08 19:00:20.969815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:32:52.392 [2024-10-08 19:00:20.969825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:52.392 [2024-10-08 19:00:20.969835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:52.393 [2024-10-08 19:00:20.969845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:32:52.393 [2024-10-08 19:00:20.969855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:52.393 [2024-10-08 19:00:20.969865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:52.393 [2024-10-08 19:00:20.969875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:32:52.393 [2024-10-08 19:00:20.969885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:52.393 [2024-10-08 19:00:20.969895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:52.393 [2024-10-08 19:00:20.969905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:32:52.393 [2024-10-08 19:00:20.969916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:52.393 [2024-10-08 19:00:20.969926] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:52.393 [2024-10-08 19:00:20.969937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:52.393 [2024-10-08 19:00:20.969947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:52.393 [2024-10-08 19:00:20.969970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:52.393 [2024-10-08 19:00:20.969981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:52.393 
[2024-10-08 19:00:20.969992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:52.393 [2024-10-08 19:00:20.970003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:52.393 [2024-10-08 19:00:20.970013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:52.393 [2024-10-08 19:00:20.970023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:52.393 [2024-10-08 19:00:20.970033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:52.393 [2024-10-08 19:00:20.970045] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:52.393 [2024-10-08 19:00:20.970058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:52.393 [2024-10-08 19:00:20.970076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:32:52.393 [2024-10-08 19:00:20.970087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:32:52.393 [2024-10-08 19:00:20.970099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:32:52.393 [2024-10-08 19:00:20.970110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:32:52.393 [2024-10-08 19:00:20.970121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:32:52.393 [2024-10-08 19:00:20.970133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:32:52.393 [2024-10-08 19:00:20.970144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:32:52.393 [2024-10-08 19:00:20.970155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:32:52.393 [2024-10-08 19:00:20.970167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:32:52.393 [2024-10-08 19:00:20.970178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:32:52.393 [2024-10-08 19:00:20.970189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:32:52.393 [2024-10-08 19:00:20.970201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:32:52.393 [2024-10-08 19:00:20.970212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:32:52.393 [2024-10-08 19:00:20.970234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:32:52.393 [2024-10-08 19:00:20.970244] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:52.393 [2024-10-08 19:00:20.970256] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:52.393 [2024-10-08 19:00:20.970267] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:52.393 [2024-10-08 19:00:20.970278] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:52.393 [2024-10-08 19:00:20.970288] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:52.393 [2024-10-08 19:00:20.970298] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:52.393 [2024-10-08 19:00:20.970309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.393 [2024-10-08 19:00:20.970323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:52.393 [2024-10-08 19:00:20.970334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.910 ms 00:32:52.393 [2024-10-08 19:00:20.970343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.393 [2024-10-08 19:00:21.030500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.393 [2024-10-08 19:00:21.030555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:52.393 [2024-10-08 19:00:21.030570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.099 ms 00:32:52.393 [2024-10-08 19:00:21.030598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.393 [2024-10-08 19:00:21.030780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.393 [2024-10-08 19:00:21.030795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:52.393 [2024-10-08 19:00:21.030807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:32:52.393 [2024-10-08 19:00:21.030818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.393 [2024-10-08 19:00:21.079694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.393 [2024-10-08 19:00:21.079740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:52.393 [2024-10-08 19:00:21.079755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.848 ms 00:32:52.393 [2024-10-08 19:00:21.079766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.393 [2024-10-08 19:00:21.079859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.393 [2024-10-08 19:00:21.079871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:52.393 [2024-10-08 19:00:21.079882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:52.393 [2024-10-08 19:00:21.079893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.393 [2024-10-08 19:00:21.080340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.393 [2024-10-08 19:00:21.080360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:52.393 [2024-10-08 19:00:21.080372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 00:32:52.393 [2024-10-08 19:00:21.080382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.393 [2024-10-08 
19:00:21.080505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.393 [2024-10-08 19:00:21.080520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:52.393 [2024-10-08 19:00:21.080530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:32:52.393 [2024-10-08 19:00:21.080541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.393 [2024-10-08 19:00:21.099486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.393 [2024-10-08 19:00:21.099527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:52.393 [2024-10-08 19:00:21.099542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.920 ms 00:32:52.393 [2024-10-08 19:00:21.099553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.393 [2024-10-08 19:00:21.119631] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:32:52.393 [2024-10-08 19:00:21.119682] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:52.393 [2024-10-08 19:00:21.119701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.393 [2024-10-08 19:00:21.119713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:52.393 [2024-10-08 19:00:21.119726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.013 ms 00:32:52.393 [2024-10-08 19:00:21.119737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.653 [2024-10-08 19:00:21.151036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.653 [2024-10-08 19:00:21.151097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:52.653 [2024-10-08 19:00:21.151119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.201 ms 00:32:52.653 [2024-10-08 19:00:21.151130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.653 [2024-10-08 19:00:21.170304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.653 [2024-10-08 19:00:21.170346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:52.653 [2024-10-08 19:00:21.170360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.060 ms 00:32:52.653 [2024-10-08 19:00:21.170371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.653 [2024-10-08 19:00:21.189343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.653 [2024-10-08 19:00:21.189383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:52.653 [2024-10-08 19:00:21.189398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.885 ms 00:32:52.653 [2024-10-08 19:00:21.189408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.653 [2024-10-08 19:00:21.190301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.653 [2024-10-08 19:00:21.190331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:52.653 [2024-10-08 19:00:21.190344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.783 ms 00:32:52.653 [2024-10-08 19:00:21.190356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.653 [2024-10-08 19:00:21.279835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:32:52.653 [2024-10-08 19:00:21.279925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:52.653 [2024-10-08 19:00:21.279943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.447 ms 00:32:52.653 [2024-10-08 19:00:21.279962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.654 [2024-10-08 19:00:21.291570] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:32:52.654 [2024-10-08 19:00:21.308270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.654 [2024-10-08 19:00:21.308337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:52.654 [2024-10-08 19:00:21.308354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.143 ms 00:32:52.654 [2024-10-08 19:00:21.308365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.654 [2024-10-08 19:00:21.308491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.654 [2024-10-08 19:00:21.308506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:52.654 [2024-10-08 19:00:21.308518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:32:52.654 [2024-10-08 19:00:21.308528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.654 [2024-10-08 19:00:21.308591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.654 [2024-10-08 19:00:21.308606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:52.654 [2024-10-08 19:00:21.308617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:32:52.654 [2024-10-08 19:00:21.308627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.654 [2024-10-08 19:00:21.308657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.654 [2024-10-08 19:00:21.308668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:52.654 [2024-10-08 19:00:21.308679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:52.654 [2024-10-08 19:00:21.308689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.654 [2024-10-08 19:00:21.308722] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:52.654 [2024-10-08 19:00:21.308735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.654 [2024-10-08 19:00:21.308745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:52.654 [2024-10-08 19:00:21.308758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:32:52.654 [2024-10-08 19:00:21.308769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.654 [2024-10-08 19:00:21.346557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.654 [2024-10-08 19:00:21.346617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:52.654 [2024-10-08 19:00:21.346632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.762 ms 00:32:52.654 [2024-10-08 19:00:21.346643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.654 [2024-10-08 19:00:21.346759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:52.654 [2024-10-08 19:00:21.346776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:32:52.654 [2024-10-08 19:00:21.346787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:32:52.654 [2024-10-08 19:00:21.346797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:52.654 [2024-10-08 19:00:21.347881] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:52.654 [2024-10-08 19:00:21.352303] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 418.046 ms, result 0 00:32:52.654 [2024-10-08 19:00:21.353218] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:52.654 [2024-10-08 19:00:21.372090] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:54.029  [2024-10-08T19:00:23.721Z] Copying: 33/256 [MB] (33 MBps) [2024-10-08T19:00:24.655Z] Copying: 64/256 [MB] (30 MBps) [2024-10-08T19:00:25.589Z] Copying: 95/256 [MB] (30 MBps) [2024-10-08T19:00:26.523Z] Copying: 125/256 [MB] (30 MBps) [2024-10-08T19:00:27.458Z] Copying: 156/256 [MB] (30 MBps) [2024-10-08T19:00:28.397Z] Copying: 185/256 [MB] (29 MBps) [2024-10-08T19:00:29.774Z] Copying: 215/256 [MB] (29 MBps) [2024-10-08T19:00:29.774Z] Copying: 245/256 [MB] (30 MBps) [2024-10-08T19:00:29.774Z] Copying: 256/256 [MB] (average 30 MBps)[2024-10-08 19:00:29.719731] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:01.017 [2024-10-08 19:00:29.734787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.017 [2024-10-08 19:00:29.734833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:01.017 [2024-10-08 19:00:29.734849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:01.017 [2024-10-08 19:00:29.734860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.017 [2024-10-08 19:00:29.734884] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:33:01.017 [2024-10-08 19:00:29.739200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.017 [2024-10-08 19:00:29.739231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:01.017 [2024-10-08 19:00:29.739244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.300 ms 00:33:01.017 [2024-10-08 19:00:29.739254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.017 [2024-10-08 19:00:29.739489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.017 [2024-10-08 19:00:29.739510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:01.017 [2024-10-08 19:00:29.739522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:33:01.017 [2024-10-08 19:00:29.739532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.017 [2024-10-08 19:00:29.742497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.017 [2024-10-08 19:00:29.742519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:33:01.017 [2024-10-08 19:00:29.742530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.949 ms 00:33:01.017 [2024-10-08 19:00:29.742541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.017 [2024-10-08 19:00:29.748361] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.017 [2024-10-08 19:00:29.748394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:33:01.017 [2024-10-08 19:00:29.748416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.802 ms 00:33:01.017 [2024-10-08 19:00:29.748427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.276 [2024-10-08 19:00:29.785269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.276 [2024-10-08 19:00:29.785310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:33:01.276 [2024-10-08 19:00:29.785323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.772 ms 00:33:01.276 [2024-10-08 19:00:29.785334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.276 [2024-10-08 19:00:29.806908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.276 [2024-10-08 19:00:29.806951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:33:01.276 [2024-10-08 19:00:29.806972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.498 ms 00:33:01.276 [2024-10-08 19:00:29.806983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.276 [2024-10-08 19:00:29.807119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.276 [2024-10-08 19:00:29.807134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:33:01.276 [2024-10-08 19:00:29.807145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:33:01.276 [2024-10-08 19:00:29.807155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.276 [2024-10-08 19:00:29.844017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.276 [2024-10-08 19:00:29.844057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:33:01.276 [2024-10-08 19:00:29.844072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.831 ms 00:33:01.276 [2024-10-08 19:00:29.844082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.276 [2024-10-08 19:00:29.880720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.276 [2024-10-08 19:00:29.880762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:33:01.276 [2024-10-08 19:00:29.880776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.580 ms 00:33:01.276 [2024-10-08 19:00:29.880785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.276 [2024-10-08 19:00:29.917211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.276 [2024-10-08 19:00:29.917252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:33:01.276 [2024-10-08 19:00:29.917266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.365 ms 00:33:01.276 [2024-10-08 19:00:29.917276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.276 [2024-10-08 19:00:29.955770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.276 [2024-10-08 19:00:29.955814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:33:01.276 [2024-10-08 19:00:29.955828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.406 ms 00:33:01.276 [2024-10-08 19:00:29.955838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:33:01.276 [2024-10-08 19:00:29.955894] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:01.276 [2024-10-08 19:00:29.955911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:33:01.276 [2024-10-08 19:00:29.955924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:01.276 [2024-10-08 19:00:29.955935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:01.276 [2024-10-08 19:00:29.955947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:01.276 [2024-10-08 19:00:29.955967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:01.276 [2024-10-08 19:00:29.955979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:01.276 [2024-10-08 19:00:29.955991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:01.276 [2024-10-08 19:00:29.956002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:01.276 [2024-10-08 19:00:29.956014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:01.276 [2024-10-08 19:00:29.956025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:01.276 [2024-10-08 19:00:29.956035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:01.276 [2024-10-08 19:00:29.956046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:01.276 [2024-10-08 19:00:29.956057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:01.276 [2024-10-08 19:00:29.956067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:01.276 [2024-10-08 19:00:29.956078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:01.276 [2024-10-08 19:00:29.956088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:01.276 [2024-10-08 19:00:29.956099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:01.276 [2024-10-08 19:00:29.956109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:01.276 [2024-10-08 19:00:29.956120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:01.276 [2024-10-08 19:00:29.956130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:01.276 [2024-10-08 19:00:29.956141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:01.276 [2024-10-08 19:00:29.956151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:01.276 [2024-10-08 19:00:29.956162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:01.276 [2024-10-08 19:00:29.956172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 
state: free 00:33:01.276 [2024-10-08 19:00:29.956183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:01.276 [2024-10-08 19:00:29.956193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:01.276 [2024-10-08 19:00:29.956204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:01.276 [2024-10-08 19:00:29.956231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 
0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.956991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.957002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.957014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.957024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.957035] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.957065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:01.277 [2024-10-08 19:00:29.957084] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:01.277 [2024-10-08 19:00:29.957094] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1e6e43eb-28c1-40da-a9ff-547ddd670846 00:33:01.277 [2024-10-08 19:00:29.957105] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:33:01.277 [2024-10-08 19:00:29.957115] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:33:01.277 [2024-10-08 19:00:29.957125] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:33:01.277 [2024-10-08 19:00:29.957160] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:33:01.277 [2024-10-08 19:00:29.957170] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:01.277 [2024-10-08 19:00:29.957181] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:01.277 [2024-10-08 19:00:29.957192] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:01.277 [2024-10-08 19:00:29.957202] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:01.277 [2024-10-08 19:00:29.957212] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:01.277 [2024-10-08 19:00:29.957223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.277 [2024-10-08 19:00:29.957234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:01.277 [2024-10-08 19:00:29.957246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.330 ms 00:33:01.277 [2024-10-08 19:00:29.957270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.277 [2024-10-08 19:00:29.978074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.277 [2024-10-08 19:00:29.978123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:01.277 [2024-10-08 19:00:29.978136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.783 ms 00:33:01.277 [2024-10-08 19:00:29.978146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.277 [2024-10-08 19:00:29.978737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.277 [2024-10-08 19:00:29.978760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:01.277 [2024-10-08 19:00:29.978771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.550 ms 00:33:01.277 [2024-10-08 19:00:29.978781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.535 [2024-10-08 19:00:30.030497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:01.535 [2024-10-08 19:00:30.030544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:01.535 [2024-10-08 19:00:30.030558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:01.535 [2024-10-08 19:00:30.030569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.535 [2024-10-08 19:00:30.030678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:01.535 [2024-10-08 19:00:30.030692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:01.535 
[2024-10-08 19:00:30.030703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:01.535 [2024-10-08 19:00:30.030713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.535 [2024-10-08 19:00:30.030772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:01.535 [2024-10-08 19:00:30.030791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:01.535 [2024-10-08 19:00:30.030802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:01.536 [2024-10-08 19:00:30.030812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.536 [2024-10-08 19:00:30.030832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:01.536 [2024-10-08 19:00:30.030843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:01.536 [2024-10-08 19:00:30.030853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:01.536 [2024-10-08 19:00:30.030863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.536 [2024-10-08 19:00:30.158540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:01.536 [2024-10-08 19:00:30.158617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:01.536 [2024-10-08 19:00:30.158634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:01.536 [2024-10-08 19:00:30.158644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.536 [2024-10-08 19:00:30.265590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:01.536 [2024-10-08 19:00:30.265642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:01.536 [2024-10-08 19:00:30.265657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:01.536 [2024-10-08 19:00:30.265685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.536 [2024-10-08 19:00:30.265792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:01.536 [2024-10-08 19:00:30.265807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:01.536 [2024-10-08 19:00:30.265819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:01.536 [2024-10-08 19:00:30.265835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.536 [2024-10-08 19:00:30.265866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:01.536 [2024-10-08 19:00:30.265878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:01.536 [2024-10-08 19:00:30.265889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:01.536 [2024-10-08 19:00:30.265900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.536 [2024-10-08 19:00:30.266047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:01.536 [2024-10-08 19:00:30.266063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:01.536 [2024-10-08 19:00:30.266074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:01.536 [2024-10-08 19:00:30.266090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.536 [2024-10-08 19:00:30.266130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:01.536 [2024-10-08 19:00:30.266144] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:01.536 [2024-10-08 19:00:30.266156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:01.536 [2024-10-08 19:00:30.266167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.536 [2024-10-08 19:00:30.266207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:01.536 [2024-10-08 19:00:30.266219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:01.536 [2024-10-08 19:00:30.266230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:01.536 [2024-10-08 19:00:30.266242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.536 [2024-10-08 19:00:30.266292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:01.536 [2024-10-08 19:00:30.266305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:01.536 [2024-10-08 19:00:30.266316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:01.536 [2024-10-08 19:00:30.266327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.536 [2024-10-08 19:00:30.266477] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 531.671 ms, result 0 00:33:02.912 00:33:02.912 00:33:02.912 19:00:31 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:33:02.912 19:00:31 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:33:03.478 19:00:32 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:33:03.478 [2024-10-08 19:00:32.168006] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
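The shell trace above (ftl/trim.sh lines 86-90) is the trim verification step: the read-back data file is compared byte-for-byte against /dev/zero, presumably to confirm the trimmed range reads back as zeroes, a checksum of it is recorded, and 1024 blocks of a random pattern are then written through the ftl0 bdev with spdk_dd. A minimal standalone sketch of that flow, using only the commands and flags visible in the trace (the block size behind --count and the contents of ftl.json are not shown in the log):

# Sketch of the trim-test data path traced above; paths are the workspace
# paths from this log, wired together outside of trim.sh for illustration.
SPDK=/home/vagrant/spdk_repo/spdk

# Trimmed LBAs are expected to read back as zeroes: compare the first 4 MiB.
cmp --bytes=4194304 "$SPDK/test/ftl/data" /dev/zero

# Record a checksum of the read-back data.
md5sum "$SPDK/test/ftl/data"

# Rewrite 1024 blocks of random data through the ftl0 bdev; --json points
# spdk_dd at the saved FTL bdev configuration.
"$SPDK/build/bin/spdk_dd" \
    --if="$SPDK/test/ftl/random_pattern" \
    --ob=ftl0 \
    --count=1024 \
    --json="$SPDK/test/ftl/config/ftl.json"

The "Copying: 4096/4096 [kB]" progress entry further down is spdk_dd reporting this transfer; 4096 KiB over 1024 blocks suggests a 4 KiB block size.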
00:33:03.478 [2024-10-08 19:00:32.168184] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77088 ] 00:33:03.736 [2024-10-08 19:00:32.344767] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:03.994 [2024-10-08 19:00:32.560408] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:33:04.252 [2024-10-08 19:00:32.932579] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:04.252 [2024-10-08 19:00:32.932654] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:04.512 [2024-10-08 19:00:33.095485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:04.512 [2024-10-08 19:00:33.095547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:04.512 [2024-10-08 19:00:33.095566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:04.512 [2024-10-08 19:00:33.095577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.512 [2024-10-08 19:00:33.098803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:04.512 [2024-10-08 19:00:33.098844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:04.512 [2024-10-08 19:00:33.098858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.204 ms 00:33:04.512 [2024-10-08 19:00:33.098869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.512 [2024-10-08 19:00:33.098984] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:04.512 [2024-10-08 19:00:33.100000] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:04.512 [2024-10-08 19:00:33.100035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:04.512 [2024-10-08 19:00:33.100051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:04.512 [2024-10-08 19:00:33.100064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.060 ms 00:33:04.512 [2024-10-08 19:00:33.100075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.512 [2024-10-08 19:00:33.101678] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:33:04.512 [2024-10-08 19:00:33.122341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:04.512 [2024-10-08 19:00:33.122383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:04.512 [2024-10-08 19:00:33.122399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.663 ms 00:33:04.512 [2024-10-08 19:00:33.122411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.512 [2024-10-08 19:00:33.122519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:04.512 [2024-10-08 19:00:33.122534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:04.512 [2024-10-08 19:00:33.122550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:33:04.512 [2024-10-08 19:00:33.122560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.512 [2024-10-08 19:00:33.129510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:33:04.512 [2024-10-08 19:00:33.129550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:04.512 [2024-10-08 19:00:33.129564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.906 ms 00:33:04.512 [2024-10-08 19:00:33.129591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.512 [2024-10-08 19:00:33.129700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:04.512 [2024-10-08 19:00:33.129719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:04.512 [2024-10-08 19:00:33.129730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:33:04.512 [2024-10-08 19:00:33.129741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.512 [2024-10-08 19:00:33.129773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:04.512 [2024-10-08 19:00:33.129784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:04.512 [2024-10-08 19:00:33.129794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:33:04.512 [2024-10-08 19:00:33.129804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.512 [2024-10-08 19:00:33.129830] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:33:04.512 [2024-10-08 19:00:33.134686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:04.512 [2024-10-08 19:00:33.134723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:04.512 [2024-10-08 19:00:33.134736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.864 ms 00:33:04.512 [2024-10-08 19:00:33.134746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.512 [2024-10-08 19:00:33.134819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:04.512 [2024-10-08 19:00:33.134837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:04.512 [2024-10-08 19:00:33.134849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:33:04.512 [2024-10-08 19:00:33.134860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.512 [2024-10-08 19:00:33.134884] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:04.513 [2024-10-08 19:00:33.134906] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:04.513 [2024-10-08 19:00:33.134945] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:04.513 [2024-10-08 19:00:33.134977] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:04.513 [2024-10-08 19:00:33.135073] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:04.513 [2024-10-08 19:00:33.135087] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:04.513 [2024-10-08 19:00:33.135100] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:04.513 [2024-10-08 19:00:33.135114] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:04.513 [2024-10-08 19:00:33.135126] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:04.513 [2024-10-08 19:00:33.135137] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:33:04.513 [2024-10-08 19:00:33.135148] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:04.513 [2024-10-08 19:00:33.135158] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:04.513 [2024-10-08 19:00:33.135168] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:04.513 [2024-10-08 19:00:33.135179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:04.513 [2024-10-08 19:00:33.135189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:04.513 [2024-10-08 19:00:33.135203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:33:04.513 [2024-10-08 19:00:33.135214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.513 [2024-10-08 19:00:33.135293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:04.513 [2024-10-08 19:00:33.135309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:04.513 [2024-10-08 19:00:33.135320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:33:04.513 [2024-10-08 19:00:33.135330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.513 [2024-10-08 19:00:33.135421] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:04.513 [2024-10-08 19:00:33.135433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:04.513 [2024-10-08 19:00:33.135453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:04.513 [2024-10-08 19:00:33.135467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:04.513 [2024-10-08 19:00:33.135478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:04.513 [2024-10-08 19:00:33.135488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:04.513 [2024-10-08 19:00:33.135497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:33:04.513 [2024-10-08 19:00:33.135507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:04.513 [2024-10-08 19:00:33.135516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:33:04.513 [2024-10-08 19:00:33.135526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:04.513 [2024-10-08 19:00:33.135536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:04.513 [2024-10-08 19:00:33.135557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:33:04.513 [2024-10-08 19:00:33.135567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:04.513 [2024-10-08 19:00:33.135577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:04.513 [2024-10-08 19:00:33.135587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:33:04.513 [2024-10-08 19:00:33.135596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:04.513 [2024-10-08 19:00:33.135606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:04.513 [2024-10-08 19:00:33.135615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:33:04.513 [2024-10-08 19:00:33.135624] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:04.513 [2024-10-08 19:00:33.135634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:04.513 [2024-10-08 19:00:33.135644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:33:04.513 [2024-10-08 19:00:33.135653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:04.513 [2024-10-08 19:00:33.135663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:04.513 [2024-10-08 19:00:33.135672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:33:04.513 [2024-10-08 19:00:33.135681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:04.513 [2024-10-08 19:00:33.135691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:04.513 [2024-10-08 19:00:33.135700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:33:04.513 [2024-10-08 19:00:33.135709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:04.513 [2024-10-08 19:00:33.135719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:04.513 [2024-10-08 19:00:33.135728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:33:04.513 [2024-10-08 19:00:33.135737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:04.513 [2024-10-08 19:00:33.135747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:04.513 [2024-10-08 19:00:33.135756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:33:04.513 [2024-10-08 19:00:33.135765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:04.513 [2024-10-08 19:00:33.135773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:04.513 [2024-10-08 19:00:33.135783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:33:04.513 [2024-10-08 19:00:33.135791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:04.513 [2024-10-08 19:00:33.135800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:04.513 [2024-10-08 19:00:33.135810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:33:04.513 [2024-10-08 19:00:33.135818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:04.513 [2024-10-08 19:00:33.135827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:04.513 [2024-10-08 19:00:33.135837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:33:04.513 [2024-10-08 19:00:33.135847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:04.513 [2024-10-08 19:00:33.135856] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:04.513 [2024-10-08 19:00:33.135866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:04.513 [2024-10-08 19:00:33.135876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:04.513 [2024-10-08 19:00:33.135886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:04.513 [2024-10-08 19:00:33.135896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:04.513 [2024-10-08 19:00:33.135906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:04.513 [2024-10-08 19:00:33.135915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:04.513 
[2024-10-08 19:00:33.135924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:04.513 [2024-10-08 19:00:33.135934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:04.513 [2024-10-08 19:00:33.135943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:04.513 [2024-10-08 19:00:33.135965] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:04.513 [2024-10-08 19:00:33.135978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:04.513 [2024-10-08 19:00:33.135995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:33:04.513 [2024-10-08 19:00:33.136005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:33:04.513 [2024-10-08 19:00:33.136016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:33:04.513 [2024-10-08 19:00:33.136026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:33:04.513 [2024-10-08 19:00:33.136037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:33:04.513 [2024-10-08 19:00:33.136048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:33:04.513 [2024-10-08 19:00:33.136059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:33:04.513 [2024-10-08 19:00:33.136069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:33:04.513 [2024-10-08 19:00:33.136080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:33:04.513 [2024-10-08 19:00:33.136090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:33:04.513 [2024-10-08 19:00:33.136101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:33:04.513 [2024-10-08 19:00:33.136111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:33:04.513 [2024-10-08 19:00:33.136121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:33:04.513 [2024-10-08 19:00:33.136132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:33:04.513 [2024-10-08 19:00:33.136142] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:04.513 [2024-10-08 19:00:33.136153] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:04.513 [2024-10-08 19:00:33.136164] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:33:04.513 [2024-10-08 19:00:33.136175] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:04.513 [2024-10-08 19:00:33.136185] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:04.513 [2024-10-08 19:00:33.136198] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:04.513 [2024-10-08 19:00:33.136209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:04.513 [2024-10-08 19:00:33.136223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:04.513 [2024-10-08 19:00:33.136233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.844 ms 00:33:04.513 [2024-10-08 19:00:33.136244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.513 [2024-10-08 19:00:33.184389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:04.513 [2024-10-08 19:00:33.184451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:04.513 [2024-10-08 19:00:33.184468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.072 ms 00:33:04.513 [2024-10-08 19:00:33.184480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.514 [2024-10-08 19:00:33.184644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:04.514 [2024-10-08 19:00:33.184657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:04.514 [2024-10-08 19:00:33.184670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:33:04.514 [2024-10-08 19:00:33.184680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.514 [2024-10-08 19:00:33.232636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:04.514 [2024-10-08 19:00:33.232686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:04.514 [2024-10-08 19:00:33.232701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.929 ms 00:33:04.514 [2024-10-08 19:00:33.232729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.514 [2024-10-08 19:00:33.232855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:04.514 [2024-10-08 19:00:33.232868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:04.514 [2024-10-08 19:00:33.232880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:04.514 [2024-10-08 19:00:33.232890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.514 [2024-10-08 19:00:33.233343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:04.514 [2024-10-08 19:00:33.233365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:04.514 [2024-10-08 19:00:33.233376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms 00:33:04.514 [2024-10-08 19:00:33.233386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.514 [2024-10-08 19:00:33.233507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:04.514 [2024-10-08 19:00:33.233527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:04.514 [2024-10-08 19:00:33.233538] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:33:04.514 [2024-10-08 19:00:33.233548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.514 [2024-10-08 19:00:33.252849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:04.514 [2024-10-08 19:00:33.252893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:04.514 [2024-10-08 19:00:33.252907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.276 ms 00:33:04.514 [2024-10-08 19:00:33.252918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.772 [2024-10-08 19:00:33.272799] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:33:04.772 [2024-10-08 19:00:33.272844] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:04.772 [2024-10-08 19:00:33.272860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:04.772 [2024-10-08 19:00:33.272872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:04.772 [2024-10-08 19:00:33.272884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.790 ms 00:33:04.772 [2024-10-08 19:00:33.272894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.773 [2024-10-08 19:00:33.302843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:04.773 [2024-10-08 19:00:33.302900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:04.773 [2024-10-08 19:00:33.302921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.843 ms 00:33:04.773 [2024-10-08 19:00:33.302949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.773 [2024-10-08 19:00:33.321387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:04.773 [2024-10-08 19:00:33.321426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:04.773 [2024-10-08 19:00:33.321439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.340 ms 00:33:04.773 [2024-10-08 19:00:33.321449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.773 [2024-10-08 19:00:33.339831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:04.773 [2024-10-08 19:00:33.339871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:04.773 [2024-10-08 19:00:33.339885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.276 ms 00:33:04.773 [2024-10-08 19:00:33.339895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.773 [2024-10-08 19:00:33.340715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:04.773 [2024-10-08 19:00:33.340752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:04.773 [2024-10-08 19:00:33.340765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.687 ms 00:33:04.773 [2024-10-08 19:00:33.340775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.773 [2024-10-08 19:00:33.429407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:04.773 [2024-10-08 19:00:33.429479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:33:04.773 [2024-10-08 19:00:33.429496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 88.601 ms 00:33:04.773 [2024-10-08 19:00:33.429524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.773 [2024-10-08 19:00:33.441237] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:33:04.773 [2024-10-08 19:00:33.457949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:04.773 [2024-10-08 19:00:33.458014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:04.773 [2024-10-08 19:00:33.458031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.271 ms 00:33:04.773 [2024-10-08 19:00:33.458057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.773 [2024-10-08 19:00:33.458197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:04.773 [2024-10-08 19:00:33.458211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:04.773 [2024-10-08 19:00:33.458223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:33:04.773 [2024-10-08 19:00:33.458234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.773 [2024-10-08 19:00:33.458301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:04.773 [2024-10-08 19:00:33.458316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:04.773 [2024-10-08 19:00:33.458327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:33:04.773 [2024-10-08 19:00:33.458338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.773 [2024-10-08 19:00:33.458364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:04.773 [2024-10-08 19:00:33.458375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:04.773 [2024-10-08 19:00:33.458386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:04.773 [2024-10-08 19:00:33.458396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.773 [2024-10-08 19:00:33.458429] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:04.773 [2024-10-08 19:00:33.458441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:04.773 [2024-10-08 19:00:33.458452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:04.773 [2024-10-08 19:00:33.458465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:33:04.773 [2024-10-08 19:00:33.458476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.773 [2024-10-08 19:00:33.497466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:04.773 [2024-10-08 19:00:33.497554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:04.773 [2024-10-08 19:00:33.497571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.962 ms 00:33:04.773 [2024-10-08 19:00:33.497582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.773 [2024-10-08 19:00:33.497761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:04.773 [2024-10-08 19:00:33.497780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:04.773 [2024-10-08 19:00:33.497792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:33:04.773 [2024-10-08 19:00:33.497803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
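Every FTL management step in this log is emitted by trace_step as a four-entry group: the step kind (Action on the forward path, Rollback on teardown), its name, its duration, and a status. That makes the log easy to mine for slow steps; in the startup above, "Restore P2L checkpoints" (88.601 ms) and "Initialize metadata" (48.072 ms) dominate. A hedged sketch for ranking steps by duration, assuming one log entry per line (as in the raw console output rather than the wrapped form shown here) and a hypothetical local copy named build.log:

# Rank FTL management steps by duration, slowest first.
# Assumes one trace_step entry per line and a saved log named build.log.
awk '/trace_step.*name:/     { sub(/.*name: /, "");     name = $0 }
     /trace_step.*duration:/ { sub(/.*duration: /, ""); print $1 " ms\t" name }' \
    build.log | sort -rn

Each group's closing status entry is the step's result code; every step in this run reports 0.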
00:33:04.773 [2024-10-08 19:00:33.498994] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:04.773 [2024-10-08 19:00:33.504115] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 403.142 ms, result 0 00:33:04.773 [2024-10-08 19:00:33.505034] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:04.773 [2024-10-08 19:00:33.524748] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:05.032  [2024-10-08T19:00:33.789Z] Copying: 4096/4096 [kB] (average 27 MBps)[2024-10-08 19:00:33.675468] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:05.032 [2024-10-08 19:00:33.690222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.032 [2024-10-08 19:00:33.690269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:05.032 [2024-10-08 19:00:33.690285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:05.032 [2024-10-08 19:00:33.690311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.032 [2024-10-08 19:00:33.690336] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:33:05.032 [2024-10-08 19:00:33.694500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.032 [2024-10-08 19:00:33.694530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:05.032 [2024-10-08 19:00:33.694543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.147 ms 00:33:05.032 [2024-10-08 19:00:33.694554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.032 [2024-10-08 19:00:33.697654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.032 [2024-10-08 19:00:33.697701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:05.032 [2024-10-08 19:00:33.697714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.056 ms 00:33:05.032 [2024-10-08 19:00:33.697724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.032 [2024-10-08 19:00:33.700966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.032 [2024-10-08 19:00:33.701010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:33:05.032 [2024-10-08 19:00:33.701023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.223 ms 00:33:05.032 [2024-10-08 19:00:33.701034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.032 [2024-10-08 19:00:33.706855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.032 [2024-10-08 19:00:33.706891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:33:05.032 [2024-10-08 19:00:33.706909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.788 ms 00:33:05.032 [2024-10-08 19:00:33.706919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.032 [2024-10-08 19:00:33.743547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.032 [2024-10-08 19:00:33.743592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:33:05.032 [2024-10-08 19:00:33.743622] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 36.553 ms 00:33:05.032 [2024-10-08 19:00:33.743632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.032 [2024-10-08 19:00:33.764928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.032 [2024-10-08 19:00:33.764978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:33:05.032 [2024-10-08 19:00:33.764992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.236 ms 00:33:05.032 [2024-10-08 19:00:33.765019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.032 [2024-10-08 19:00:33.765183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.032 [2024-10-08 19:00:33.765198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:33:05.032 [2024-10-08 19:00:33.765210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:33:05.032 [2024-10-08 19:00:33.765220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.292 [2024-10-08 19:00:33.802457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.292 [2024-10-08 19:00:33.802498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:33:05.292 [2024-10-08 19:00:33.802512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.212 ms 00:33:05.292 [2024-10-08 19:00:33.802538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.292 [2024-10-08 19:00:33.839906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.292 [2024-10-08 19:00:33.839953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:33:05.292 [2024-10-08 19:00:33.839978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.309 ms 00:33:05.292 [2024-10-08 19:00:33.839989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.292 [2024-10-08 19:00:33.876866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.292 [2024-10-08 19:00:33.876916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:33:05.292 [2024-10-08 19:00:33.876931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.815 ms 00:33:05.292 [2024-10-08 19:00:33.876941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.292 [2024-10-08 19:00:33.914054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.292 [2024-10-08 19:00:33.914098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:33:05.292 [2024-10-08 19:00:33.914129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.996 ms 00:33:05.292 [2024-10-08 19:00:33.914139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.292 [2024-10-08 19:00:33.914196] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:05.292 [2024-10-08 19:00:33.914214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:33:05.292 [2024-10-08 19:00:33.914227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:05.292 [2024-10-08 19:00:33.914239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:05.292 [2024-10-08 19:00:33.914250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:33:05.292 [2024-10-08 19:00:33.914261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:05.292 [2024-10-08 19:00:33.914273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:05.292 [2024-10-08 19:00:33.914283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:05.292 [2024-10-08 19:00:33.914295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:05.292 [2024-10-08 19:00:33.914306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:05.292 [2024-10-08 19:00:33.914317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:05.292 [2024-10-08 19:00:33.914327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:05.292 [2024-10-08 19:00:33.914338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:05.292 [2024-10-08 19:00:33.914349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:05.292 [2024-10-08 19:00:33.914360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:05.292 [2024-10-08 19:00:33.914370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:05.292 [2024-10-08 19:00:33.914381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:05.292 [2024-10-08 19:00:33.914391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:05.292 [2024-10-08 19:00:33.914402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:05.292 [2024-10-08 19:00:33.914412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:05.292 [2024-10-08 19:00:33.914423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:05.292 [2024-10-08 19:00:33.914433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:05.292 [2024-10-08 19:00:33.914444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:05.292 [2024-10-08 19:00:33.914454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:05.292 [2024-10-08 19:00:33.914465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:05.292 [2024-10-08 19:00:33.914475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:05.292 [2024-10-08 19:00:33.914485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:05.292 [2024-10-08 19:00:33.914496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:05.292 [2024-10-08 19:00:33.914507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:05.292 [2024-10-08 19:00:33.914517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:33:05.292 [2024-10-08 19:00:33.914528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:05.292 [2024-10-08 19:00:33.914539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:05.292 [2024-10-08 19:00:33.914549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:05.292 [2024-10-08 19:00:33.914560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:05.292 [2024-10-08 19:00:33.914571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:05.292 [2024-10-08 19:00:33.914582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:05.292 [2024-10-08 19:00:33.914592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.914603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.914614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.914624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.914635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.914646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.914656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.914667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.914678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.914688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.914699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.914709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.914719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.914730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.914740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.914751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.914761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.914771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.914782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.914792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.914802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.914812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.914823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.914834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.914844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.914855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.914866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.914876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.914887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.914898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.914909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.914919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.914930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.914940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.914951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.914973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.914984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.914995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.915005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.915016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.915026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.915037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.915047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.915062] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.915081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.915098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.915112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.915127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.915142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.915156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.915170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.915185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.915199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.915214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.915228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.915242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.915256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.915270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.915285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.915299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.915314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.915328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.915342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.915356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.915385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:05.293 [2024-10-08 19:00:33.915406] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:05.293 [2024-10-08 19:00:33.915419] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1e6e43eb-28c1-40da-a9ff-547ddd670846 00:33:05.293 [2024-10-08 19:00:33.915433] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:33:05.293 [2024-10-08 19:00:33.915452] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:33:05.293 [2024-10-08 19:00:33.915462] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:33:05.293 [2024-10-08 19:00:33.915477] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:33:05.293 [2024-10-08 19:00:33.915487] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:05.293 [2024-10-08 19:00:33.915497] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:05.293 [2024-10-08 19:00:33.915508] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:05.293 [2024-10-08 19:00:33.915517] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:05.293 [2024-10-08 19:00:33.915526] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:05.293 [2024-10-08 19:00:33.915536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.293 [2024-10-08 19:00:33.915547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:05.293 [2024-10-08 19:00:33.915558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.341 ms 00:33:05.293 [2024-10-08 19:00:33.915569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.293 [2024-10-08 19:00:33.935864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.293 [2024-10-08 19:00:33.935910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:05.293 [2024-10-08 19:00:33.935924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.271 ms 00:33:05.293 [2024-10-08 19:00:33.935934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.293 [2024-10-08 19:00:33.936506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.293 [2024-10-08 19:00:33.936532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:05.293 [2024-10-08 19:00:33.936544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.519 ms 00:33:05.293 [2024-10-08 19:00:33.936557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.293 [2024-10-08 19:00:33.985670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:05.293 [2024-10-08 19:00:33.985712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:05.293 [2024-10-08 19:00:33.985726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:05.293 [2024-10-08 19:00:33.985753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.293 [2024-10-08 19:00:33.985852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:05.293 [2024-10-08 19:00:33.985865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:05.293 [2024-10-08 19:00:33.985877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:05.293 [2024-10-08 19:00:33.985887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.293 [2024-10-08 19:00:33.985936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:05.293 [2024-10-08 19:00:33.985954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:05.293 [2024-10-08 19:00:33.985965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:05.293 [2024-10-08 19:00:33.985987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.293 [2024-10-08 19:00:33.986008] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:05.293 [2024-10-08 19:00:33.986020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:05.294 [2024-10-08 19:00:33.986030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:05.294 [2024-10-08 19:00:33.986041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.553 [2024-10-08 19:00:34.119210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:05.553 [2024-10-08 19:00:34.119276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:05.553 [2024-10-08 19:00:34.119292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:05.553 [2024-10-08 19:00:34.119320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.553 [2024-10-08 19:00:34.227757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:05.553 [2024-10-08 19:00:34.227842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:05.553 [2024-10-08 19:00:34.227861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:05.553 [2024-10-08 19:00:34.227873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.553 [2024-10-08 19:00:34.227991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:05.553 [2024-10-08 19:00:34.228006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:05.553 [2024-10-08 19:00:34.228024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:05.553 [2024-10-08 19:00:34.228036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.553 [2024-10-08 19:00:34.228069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:05.553 [2024-10-08 19:00:34.228081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:05.553 [2024-10-08 19:00:34.228092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:05.553 [2024-10-08 19:00:34.228103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.553 [2024-10-08 19:00:34.228232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:05.553 [2024-10-08 19:00:34.228247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:05.553 [2024-10-08 19:00:34.228259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:05.553 [2024-10-08 19:00:34.228275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.553 [2024-10-08 19:00:34.228317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:05.553 [2024-10-08 19:00:34.228331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:05.553 [2024-10-08 19:00:34.228343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:05.553 [2024-10-08 19:00:34.228353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.553 [2024-10-08 19:00:34.228393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:05.553 [2024-10-08 19:00:34.228405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:05.553 [2024-10-08 19:00:34.228418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:05.553 [2024-10-08 19:00:34.228433] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:33:05.553 [2024-10-08 19:00:34.228484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:05.553 [2024-10-08 19:00:34.228497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:05.553 [2024-10-08 19:00:34.228509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:05.553 [2024-10-08 19:00:34.228519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.553 [2024-10-08 19:00:34.228671] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 538.432 ms, result 0 00:33:06.945 00:33:06.945 00:33:06.945 19:00:35 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=77130 00:33:06.945 19:00:35 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:33:06.945 19:00:35 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 77130 00:33:06.945 19:00:35 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 77130 ']' 00:33:06.945 19:00:35 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:06.945 19:00:35 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:06.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:06.945 19:00:35 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:06.945 19:00:35 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:06.945 19:00:35 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:33:06.945 [2024-10-08 19:00:35.581902] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
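
Note: the trim test above launches a fresh spdk_tgt with FTL init tracing (-L ftl_init) and then blocks in waitforlisten until the RPC socket at /var/tmp/spdk.sock answers. A minimal sketch of that launch-and-wait pattern, assuming the default socket path; the poll loop is illustrative and stands in for the actual autotest_common.sh helper:

    spdk_repo=/home/vagrant/spdk_repo/spdk
    # Start the target in the background and remember its pid (svcpid in the log).
    "$spdk_repo/build/bin/spdk_tgt" -L ftl_init &
    svcpid=$!
    # Poll until the app services RPCs; rpc_get_methods is a cheap query that
    # only succeeds once the target is listening on the socket.
    until "$spdk_repo/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done
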
00:33:06.945 [2024-10-08 19:00:35.582050] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77130 ] 00:33:07.203 [2024-10-08 19:00:35.745656] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:07.460 [2024-10-08 19:00:35.962896] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:33:08.394 19:00:36 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:08.394 19:00:36 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:33:08.394 19:00:36 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:33:08.653 [2024-10-08 19:00:37.175667] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:08.653 [2024-10-08 19:00:37.175746] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:08.653 [2024-10-08 19:00:37.366324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.653 [2024-10-08 19:00:37.366382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:08.653 [2024-10-08 19:00:37.366420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:33:08.653 [2024-10-08 19:00:37.366432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.653 [2024-10-08 19:00:37.370571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.653 [2024-10-08 19:00:37.370609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:08.653 [2024-10-08 19:00:37.370644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.108 ms 00:33:08.653 [2024-10-08 19:00:37.370655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.653 [2024-10-08 19:00:37.370773] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:08.653 [2024-10-08 19:00:37.371888] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:08.653 [2024-10-08 19:00:37.371926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.653 [2024-10-08 19:00:37.371939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:08.653 [2024-10-08 19:00:37.371953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.165 ms 00:33:08.653 [2024-10-08 19:00:37.371984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.653 [2024-10-08 19:00:37.373490] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:33:08.653 [2024-10-08 19:00:37.394530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.653 [2024-10-08 19:00:37.394576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:08.653 [2024-10-08 19:00:37.394591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.045 ms 00:33:08.653 [2024-10-08 19:00:37.394607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.653 [2024-10-08 19:00:37.394725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.653 [2024-10-08 19:00:37.394751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:08.653 [2024-10-08 19:00:37.394763] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:33:08.653 [2024-10-08 19:00:37.394778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.653 [2024-10-08 19:00:37.401673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.653 [2024-10-08 19:00:37.401715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:08.653 [2024-10-08 19:00:37.401728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.840 ms 00:33:08.653 [2024-10-08 19:00:37.401744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.653 [2024-10-08 19:00:37.401891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.653 [2024-10-08 19:00:37.401911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:08.653 [2024-10-08 19:00:37.401923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:33:08.653 [2024-10-08 19:00:37.401938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.653 [2024-10-08 19:00:37.401987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.653 [2024-10-08 19:00:37.402007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:08.653 [2024-10-08 19:00:37.402018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:33:08.653 [2024-10-08 19:00:37.402032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.653 [2024-10-08 19:00:37.402060] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:33:08.913 [2024-10-08 19:00:37.407070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.913 [2024-10-08 19:00:37.407099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:08.913 [2024-10-08 19:00:37.407116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.012 ms 00:33:08.913 [2024-10-08 19:00:37.407132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.913 [2024-10-08 19:00:37.407211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.913 [2024-10-08 19:00:37.407224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:08.913 [2024-10-08 19:00:37.407240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:33:08.913 [2024-10-08 19:00:37.407251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.913 [2024-10-08 19:00:37.407278] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:08.913 [2024-10-08 19:00:37.407302] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:08.913 [2024-10-08 19:00:37.407355] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:08.913 [2024-10-08 19:00:37.407382] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:08.913 [2024-10-08 19:00:37.407509] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:08.913 [2024-10-08 19:00:37.407524] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:08.913 [2024-10-08 19:00:37.407547] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:08.913 [2024-10-08 19:00:37.407562] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:08.913 [2024-10-08 19:00:37.407581] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:08.913 [2024-10-08 19:00:37.407594] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:33:08.913 [2024-10-08 19:00:37.407611] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:08.913 [2024-10-08 19:00:37.407622] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:08.913 [2024-10-08 19:00:37.407644] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:08.913 [2024-10-08 19:00:37.407661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.913 [2024-10-08 19:00:37.407678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:08.913 [2024-10-08 19:00:37.407690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.390 ms 00:33:08.913 [2024-10-08 19:00:37.407706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.913 [2024-10-08 19:00:37.407791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.913 [2024-10-08 19:00:37.407809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:08.913 [2024-10-08 19:00:37.407820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:33:08.913 [2024-10-08 19:00:37.407836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.913 [2024-10-08 19:00:37.407936] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:08.913 [2024-10-08 19:00:37.407974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:08.913 [2024-10-08 19:00:37.407988] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:08.913 [2024-10-08 19:00:37.408005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:08.913 [2024-10-08 19:00:37.408017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:08.913 [2024-10-08 19:00:37.408032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:08.913 [2024-10-08 19:00:37.408043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:33:08.913 [2024-10-08 19:00:37.408066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:08.913 [2024-10-08 19:00:37.408077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:33:08.913 [2024-10-08 19:00:37.408093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:08.913 [2024-10-08 19:00:37.408104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:08.913 [2024-10-08 19:00:37.408120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:33:08.913 [2024-10-08 19:00:37.408130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:08.913 [2024-10-08 19:00:37.408146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:08.913 [2024-10-08 19:00:37.408159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:33:08.913 [2024-10-08 19:00:37.408175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:08.913 
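
Note: the layout figures above are internally consistent and easy to sanity-check. With an L2P address size of 4 bytes, the 23592960 L2P entries need 23592960 * 4 = 94371840 bytes, which is exactly the 90.00 MiB reported for the l2p region:

    # 23592960 entries * 4 B per entry, expressed in MiB.
    echo $(( 23592960 * 4 / 1024 / 1024 ))   # -> 90, matching "Region l2p ... blocks: 90.00 MiB"
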
[2024-10-08 19:00:37.408186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:08.913 [2024-10-08 19:00:37.408201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:33:08.913 [2024-10-08 19:00:37.408224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:08.913 [2024-10-08 19:00:37.408241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:08.913 [2024-10-08 19:00:37.408252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:33:08.913 [2024-10-08 19:00:37.408268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:08.913 [2024-10-08 19:00:37.408278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:08.913 [2024-10-08 19:00:37.408298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:33:08.913 [2024-10-08 19:00:37.408309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:08.913 [2024-10-08 19:00:37.408338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:08.913 [2024-10-08 19:00:37.408349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:33:08.913 [2024-10-08 19:00:37.408365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:08.913 [2024-10-08 19:00:37.408375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:08.913 [2024-10-08 19:00:37.408388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:33:08.913 [2024-10-08 19:00:37.408398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:08.913 [2024-10-08 19:00:37.408412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:08.913 [2024-10-08 19:00:37.408422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:33:08.913 [2024-10-08 19:00:37.408436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:08.913 [2024-10-08 19:00:37.408447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:08.913 [2024-10-08 19:00:37.408459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:33:08.913 [2024-10-08 19:00:37.408469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:08.913 [2024-10-08 19:00:37.408482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:08.913 [2024-10-08 19:00:37.408492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:33:08.913 [2024-10-08 19:00:37.408508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:08.913 [2024-10-08 19:00:37.408518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:08.913 [2024-10-08 19:00:37.408530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:33:08.913 [2024-10-08 19:00:37.408541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:08.913 [2024-10-08 19:00:37.408553] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:08.914 [2024-10-08 19:00:37.408564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:08.914 [2024-10-08 19:00:37.408578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:08.914 [2024-10-08 19:00:37.408589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:08.914 [2024-10-08 19:00:37.408603] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:33:08.914 [2024-10-08 19:00:37.408614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:08.914 [2024-10-08 19:00:37.408639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:08.914 [2024-10-08 19:00:37.408649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:08.914 [2024-10-08 19:00:37.408660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:08.914 [2024-10-08 19:00:37.408670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:08.914 [2024-10-08 19:00:37.408683] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:08.914 [2024-10-08 19:00:37.408696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:08.914 [2024-10-08 19:00:37.408712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:33:08.914 [2024-10-08 19:00:37.408723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:33:08.914 [2024-10-08 19:00:37.408736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:33:08.914 [2024-10-08 19:00:37.408746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:33:08.914 [2024-10-08 19:00:37.408760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:33:08.914 [2024-10-08 19:00:37.408771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:33:08.914 [2024-10-08 19:00:37.408784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:33:08.914 [2024-10-08 19:00:37.408795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:33:08.914 [2024-10-08 19:00:37.408808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:33:08.914 [2024-10-08 19:00:37.408819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:33:08.914 [2024-10-08 19:00:37.408832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:33:08.914 [2024-10-08 19:00:37.408842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:33:08.914 [2024-10-08 19:00:37.408855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:33:08.914 [2024-10-08 19:00:37.408865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:33:08.914 [2024-10-08 19:00:37.408878] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:08.914 [2024-10-08 
19:00:37.408890] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:08.914 [2024-10-08 19:00:37.408909] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:08.914 [2024-10-08 19:00:37.408921] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:08.914 [2024-10-08 19:00:37.408933] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:08.914 [2024-10-08 19:00:37.408944] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:08.914 [2024-10-08 19:00:37.408957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.914 [2024-10-08 19:00:37.408976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:08.914 [2024-10-08 19:00:37.408990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.080 ms 00:33:08.914 [2024-10-08 19:00:37.409000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.914 [2024-10-08 19:00:37.449621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.914 [2024-10-08 19:00:37.449659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:08.914 [2024-10-08 19:00:37.449696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.552 ms 00:33:08.914 [2024-10-08 19:00:37.449709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.914 [2024-10-08 19:00:37.449860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.914 [2024-10-08 19:00:37.449874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:08.914 [2024-10-08 19:00:37.449892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:33:08.914 [2024-10-08 19:00:37.449903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.914 [2024-10-08 19:00:37.519849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.914 [2024-10-08 19:00:37.519892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:08.914 [2024-10-08 19:00:37.519915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.913 ms 00:33:08.914 [2024-10-08 19:00:37.519929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.914 [2024-10-08 19:00:37.520071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.914 [2024-10-08 19:00:37.520089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:08.914 [2024-10-08 19:00:37.520107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:08.914 [2024-10-08 19:00:37.520123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.914 [2024-10-08 19:00:37.520590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.914 [2024-10-08 19:00:37.520612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:08.914 [2024-10-08 19:00:37.520630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.437 ms 00:33:08.914 [2024-10-08 19:00:37.520643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:33:08.914 [2024-10-08 19:00:37.520796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.914 [2024-10-08 19:00:37.520812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:08.914 [2024-10-08 19:00:37.520828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:33:08.914 [2024-10-08 19:00:37.520842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.914 [2024-10-08 19:00:37.546458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.914 [2024-10-08 19:00:37.546498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:08.914 [2024-10-08 19:00:37.546518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.575 ms 00:33:08.914 [2024-10-08 19:00:37.546536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.914 [2024-10-08 19:00:37.568650] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:33:08.914 [2024-10-08 19:00:37.568695] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:08.914 [2024-10-08 19:00:37.568717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.914 [2024-10-08 19:00:37.568731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:08.914 [2024-10-08 19:00:37.568749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.039 ms 00:33:08.914 [2024-10-08 19:00:37.568761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.914 [2024-10-08 19:00:37.601679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.914 [2024-10-08 19:00:37.601727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:08.914 [2024-10-08 19:00:37.601747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.812 ms 00:33:08.914 [2024-10-08 19:00:37.601771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.914 [2024-10-08 19:00:37.622211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.914 [2024-10-08 19:00:37.622250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:08.914 [2024-10-08 19:00:37.622275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.356 ms 00:33:08.914 [2024-10-08 19:00:37.622287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.914 [2024-10-08 19:00:37.642461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.914 [2024-10-08 19:00:37.642497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:08.914 [2024-10-08 19:00:37.642516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.054 ms 00:33:08.914 [2024-10-08 19:00:37.642527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.914 [2024-10-08 19:00:37.643498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.914 [2024-10-08 19:00:37.643527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:08.914 [2024-10-08 19:00:37.643546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.853 ms 00:33:08.914 [2024-10-08 19:00:37.643564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:09.173 [2024-10-08 
19:00:37.738098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:09.173 [2024-10-08 19:00:37.738157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:33:09.173 [2024-10-08 19:00:37.738179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.497 ms 00:33:09.173 [2024-10-08 19:00:37.738194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:09.173 [2024-10-08 19:00:37.750786] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:33:09.173 [2024-10-08 19:00:37.767911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:09.173 [2024-10-08 19:00:37.767987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:09.173 [2024-10-08 19:00:37.768003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.569 ms 00:33:09.173 [2024-10-08 19:00:37.768017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:09.173 [2024-10-08 19:00:37.768157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:09.173 [2024-10-08 19:00:37.768175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:09.173 [2024-10-08 19:00:37.768189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:33:09.173 [2024-10-08 19:00:37.768203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:09.173 [2024-10-08 19:00:37.768266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:09.173 [2024-10-08 19:00:37.768282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:09.173 [2024-10-08 19:00:37.768294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:33:09.173 [2024-10-08 19:00:37.768307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:09.173 [2024-10-08 19:00:37.768334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:09.173 [2024-10-08 19:00:37.768348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:09.173 [2024-10-08 19:00:37.768360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:09.173 [2024-10-08 19:00:37.768382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:09.173 [2024-10-08 19:00:37.768420] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:09.173 [2024-10-08 19:00:37.768442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:09.173 [2024-10-08 19:00:37.768454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:09.173 [2024-10-08 19:00:37.768468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:33:09.173 [2024-10-08 19:00:37.768479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:09.173 [2024-10-08 19:00:37.807494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:09.173 [2024-10-08 19:00:37.807534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:09.173 [2024-10-08 19:00:37.807568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.983 ms 00:33:09.173 [2024-10-08 19:00:37.807580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:09.173 [2024-10-08 19:00:37.807720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:09.173 [2024-10-08 19:00:37.807733] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:09.173 [2024-10-08 19:00:37.807748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:33:09.173 [2024-10-08 19:00:37.807757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:09.173 [2024-10-08 19:00:37.808859] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:09.173 [2024-10-08 19:00:37.813636] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 442.110 ms, result 0 00:33:09.173 [2024-10-08 19:00:37.814718] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:09.173 Some configs were skipped because the RPC state that can call them passed over. 00:33:09.173 19:00:37 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:33:09.432 [2024-10-08 19:00:38.122992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:09.432 [2024-10-08 19:00:38.123056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:33:09.432 [2024-10-08 19:00:38.123076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.369 ms 00:33:09.432 [2024-10-08 19:00:38.123091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:09.432 [2024-10-08 19:00:38.123130] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.513 ms, result 0 00:33:09.432 true 00:33:09.432 19:00:38 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:33:09.690 [2024-10-08 19:00:38.322925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:09.690 [2024-10-08 19:00:38.322993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:33:09.690 [2024-10-08 19:00:38.323014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.093 ms 00:33:09.690 [2024-10-08 19:00:38.323027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:09.690 [2024-10-08 19:00:38.323074] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.249 ms, result 0 00:33:09.690 true 00:33:09.690 19:00:38 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 77130 00:33:09.690 19:00:38 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 77130 ']' 00:33:09.690 19:00:38 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 77130 00:33:09.690 19:00:38 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:33:09.690 19:00:38 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:09.690 19:00:38 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77130 00:33:09.690 killing process with pid 77130 00:33:09.690 19:00:38 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:09.690 19:00:38 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:09.690 19:00:38 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77130' 00:33:09.690 19:00:38 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 77130 00:33:09.690 19:00:38 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 77130 00:33:11.065 [2024-10-08 19:00:39.577452] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.065 [2024-10-08 19:00:39.577544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:11.065 [2024-10-08 19:00:39.577561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:11.065 [2024-10-08 19:00:39.577575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.065 [2024-10-08 19:00:39.577601] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:33:11.065 [2024-10-08 19:00:39.582040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.065 [2024-10-08 19:00:39.582075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:11.065 [2024-10-08 19:00:39.582093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.416 ms 00:33:11.065 [2024-10-08 19:00:39.582105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.065 [2024-10-08 19:00:39.582372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.065 [2024-10-08 19:00:39.582403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:11.065 [2024-10-08 19:00:39.582417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:33:11.065 [2024-10-08 19:00:39.582432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.065 [2024-10-08 19:00:39.586034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.065 [2024-10-08 19:00:39.586074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:33:11.065 [2024-10-08 19:00:39.586106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.577 ms 00:33:11.065 [2024-10-08 19:00:39.586118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.065 [2024-10-08 19:00:39.592243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.065 [2024-10-08 19:00:39.592280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:33:11.065 [2024-10-08 19:00:39.592297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.064 ms 00:33:11.065 [2024-10-08 19:00:39.592310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.065 [2024-10-08 19:00:39.608261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.065 [2024-10-08 19:00:39.608301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:33:11.065 [2024-10-08 19:00:39.608322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.876 ms 00:33:11.065 [2024-10-08 19:00:39.608333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.065 [2024-10-08 19:00:39.618812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.065 [2024-10-08 19:00:39.618851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:33:11.065 [2024-10-08 19:00:39.618885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.402 ms 00:33:11.065 [2024-10-08 19:00:39.618908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.065 [2024-10-08 19:00:39.619070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.065 [2024-10-08 19:00:39.619086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:33:11.065 [2024-10-08 19:00:39.619101] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:33:11.065 [2024-10-08 19:00:39.619115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.065 [2024-10-08 19:00:39.635349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.065 [2024-10-08 19:00:39.635388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:33:11.065 [2024-10-08 19:00:39.635425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.201 ms 00:33:11.065 [2024-10-08 19:00:39.635436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.065 [2024-10-08 19:00:39.651433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.065 [2024-10-08 19:00:39.651477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:33:11.065 [2024-10-08 19:00:39.651505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.925 ms 00:33:11.065 [2024-10-08 19:00:39.651531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.065 [2024-10-08 19:00:39.666689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.065 [2024-10-08 19:00:39.666726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:33:11.065 [2024-10-08 19:00:39.666745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.095 ms 00:33:11.065 [2024-10-08 19:00:39.666756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.065 [2024-10-08 19:00:39.682764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.065 [2024-10-08 19:00:39.682800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:33:11.065 [2024-10-08 19:00:39.682820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.898 ms 00:33:11.065 [2024-10-08 19:00:39.682830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.065 [2024-10-08 19:00:39.682902] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:11.065 [2024-10-08 19:00:39.682925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.682943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.682966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.682982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.682993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 
19:00:39.683076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
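
Note: each band line reads "valid blocks / band capacity", so "0 / 261120 wr_cnt: 0 state: free" is an untouched band of 261120 blocks. Given a saved copy of this console output (ftl.log below is a hypothetical file name), the per-state band counts can be tallied in one line:

    # Count bands per state across the dump; this run should report 100x "free".
    grep -o 'state: [a-z]*' ftl.log | sort | uniq -c
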
00:33:11.065 [2024-10-08 19:00:39.683407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.683990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.684003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.684022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.684035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.684052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.684066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.684083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.684096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:11.065 [2024-10-08 19:00:39.684115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:11.066 [2024-10-08 19:00:39.684128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:11.066 [2024-10-08 19:00:39.684147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:11.066 [2024-10-08 19:00:39.684160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:11.066 [2024-10-08 19:00:39.684178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:11.066 [2024-10-08 19:00:39.684191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:11.066 [2024-10-08 19:00:39.684214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:11.066 [2024-10-08 19:00:39.684227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:11.066 [2024-10-08 19:00:39.684245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:11.066 [2024-10-08 19:00:39.684258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:11.066 [2024-10-08 19:00:39.684276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:11.066 [2024-10-08 19:00:39.684288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:11.066 [2024-10-08 19:00:39.684306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:11.066 [2024-10-08 19:00:39.684319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:11.066 [2024-10-08 19:00:39.684338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:11.066 [2024-10-08 19:00:39.684351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:11.066 [2024-10-08 19:00:39.684369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:11.066 [2024-10-08 19:00:39.684382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:11.066 [2024-10-08 19:00:39.684400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:11.066 [2024-10-08 19:00:39.684413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:11.066 [2024-10-08 19:00:39.684431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:11.066 [2024-10-08 19:00:39.684452] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:11.066 [2024-10-08 19:00:39.684475] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1e6e43eb-28c1-40da-a9ff-547ddd670846 00:33:11.066 [2024-10-08 19:00:39.684488] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:33:11.066 [2024-10-08 19:00:39.684505] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:33:11.066 [2024-10-08 19:00:39.684517] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:33:11.066 [2024-10-08 19:00:39.684536] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:33:11.066 [2024-10-08 19:00:39.684562] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:11.066 [2024-10-08 19:00:39.684581] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:11.066 [2024-10-08 19:00:39.684610] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:11.066 [2024-10-08 19:00:39.684626] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:11.066 [2024-10-08 19:00:39.684636] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:11.066 [2024-10-08 19:00:39.684652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
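
Note: the WAF line in the statistics dump is consistent with write amplification computed as total writes divided by user writes; with 960 total writes and 0 user writes here, the ratio is reported as inf. A quick illustration of that arithmetic:

    # WAF = total writes / user writes; division by zero is reported as "inf".
    awk 'BEGIN { total = 960; user = 0; print (user ? total / user : "inf") }'
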
00:33:11.066 [2024-10-08 19:00:39.684664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:11.066 [2024-10-08 19:00:39.684681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.751 ms 00:33:11.066 [2024-10-08 19:00:39.684693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.066 [2024-10-08 19:00:39.706040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.066 [2024-10-08 19:00:39.706077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:11.066 [2024-10-08 19:00:39.706101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.312 ms 00:33:11.066 [2024-10-08 19:00:39.706129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.066 [2024-10-08 19:00:39.706784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.066 [2024-10-08 19:00:39.706809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:11.066 [2024-10-08 19:00:39.706827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.573 ms 00:33:11.066 [2024-10-08 19:00:39.706838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.066 [2024-10-08 19:00:39.774296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.066 [2024-10-08 19:00:39.774344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:11.066 [2024-10-08 19:00:39.774364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.066 [2024-10-08 19:00:39.774397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.066 [2024-10-08 19:00:39.774506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.066 [2024-10-08 19:00:39.774520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:11.066 [2024-10-08 19:00:39.774537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.066 [2024-10-08 19:00:39.774549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.066 [2024-10-08 19:00:39.774610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.066 [2024-10-08 19:00:39.774624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:11.066 [2024-10-08 19:00:39.774647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.066 [2024-10-08 19:00:39.774658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.066 [2024-10-08 19:00:39.774691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.066 [2024-10-08 19:00:39.774703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:11.066 [2024-10-08 19:00:39.774720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.066 [2024-10-08 19:00:39.774731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.323 [2024-10-08 19:00:39.904384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.323 [2024-10-08 19:00:39.904449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:11.323 [2024-10-08 19:00:39.904471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.323 [2024-10-08 19:00:39.904499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.323 [2024-10-08 
19:00:40.016052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.323 [2024-10-08 19:00:40.016111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:11.323 [2024-10-08 19:00:40.016131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.323 [2024-10-08 19:00:40.016159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.323 [2024-10-08 19:00:40.016284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.323 [2024-10-08 19:00:40.016298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:11.323 [2024-10-08 19:00:40.016321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.323 [2024-10-08 19:00:40.016333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.323 [2024-10-08 19:00:40.016370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.323 [2024-10-08 19:00:40.016388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:11.323 [2024-10-08 19:00:40.016405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.323 [2024-10-08 19:00:40.016415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.323 [2024-10-08 19:00:40.016546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.323 [2024-10-08 19:00:40.016560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:11.323 [2024-10-08 19:00:40.016578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.323 [2024-10-08 19:00:40.016589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.323 [2024-10-08 19:00:40.016638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.323 [2024-10-08 19:00:40.016652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:11.323 [2024-10-08 19:00:40.016674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.323 [2024-10-08 19:00:40.016686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.323 [2024-10-08 19:00:40.016731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.323 [2024-10-08 19:00:40.016744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:11.323 [2024-10-08 19:00:40.016766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.323 [2024-10-08 19:00:40.016777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.324 [2024-10-08 19:00:40.016829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.324 [2024-10-08 19:00:40.016848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:11.324 [2024-10-08 19:00:40.016865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.324 [2024-10-08 19:00:40.016876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.324 [2024-10-08 19:00:40.017043] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 439.558 ms, result 0 00:33:12.697 19:00:41 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:33:12.697 [2024-10-08 19:00:41.372621] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:33:12.697 [2024-10-08 19:00:41.373140] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77202 ] 00:33:12.955 [2024-10-08 19:00:41.581533] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:13.263 [2024-10-08 19:00:41.826819] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:33:13.520 [2024-10-08 19:00:42.212919] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:13.520 [2024-10-08 19:00:42.213003] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:13.778 [2024-10-08 19:00:42.377269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:13.778 [2024-10-08 19:00:42.377710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:13.778 [2024-10-08 19:00:42.377821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:33:13.778 [2024-10-08 19:00:42.377893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.778 [2024-10-08 19:00:42.381417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:13.778 [2024-10-08 19:00:42.381555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:13.778 [2024-10-08 19:00:42.381628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.415 ms 00:33:13.778 [2024-10-08 19:00:42.381687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.778 [2024-10-08 19:00:42.382132] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:13.778 [2024-10-08 19:00:42.383417] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:13.778 [2024-10-08 19:00:42.383549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:13.778 [2024-10-08 19:00:42.383614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:13.778 [2024-10-08 19:00:42.383678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.427 ms 00:33:13.778 [2024-10-08 19:00:42.383740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.778 [2024-10-08 19:00:42.385453] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:33:13.778 [2024-10-08 19:00:42.406905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:13.778 [2024-10-08 19:00:42.407047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:13.778 [2024-10-08 19:00:42.407116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.452 ms 00:33:13.778 [2024-10-08 19:00:42.407176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.778 [2024-10-08 19:00:42.407474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:13.778 [2024-10-08 19:00:42.407561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:13.778 [2024-10-08 19:00:42.407643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:33:13.778 [2024-10-08 
19:00:42.407712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.778 [2024-10-08 19:00:42.414780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:13.778 [2024-10-08 19:00:42.414892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:13.778 [2024-10-08 19:00:42.414991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.971 ms 00:33:13.778 [2024-10-08 19:00:42.415078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.778 [2024-10-08 19:00:42.415209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:13.778 [2024-10-08 19:00:42.415230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:13.778 [2024-10-08 19:00:42.415244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:33:13.778 [2024-10-08 19:00:42.415256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.778 [2024-10-08 19:00:42.415292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:13.778 [2024-10-08 19:00:42.415304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:13.778 [2024-10-08 19:00:42.415317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:33:13.778 [2024-10-08 19:00:42.415329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.778 [2024-10-08 19:00:42.415356] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:33:13.778 [2024-10-08 19:00:42.420816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:13.778 [2024-10-08 19:00:42.420849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:13.778 [2024-10-08 19:00:42.420863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.467 ms 00:33:13.778 [2024-10-08 19:00:42.420874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.778 [2024-10-08 19:00:42.420950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:13.778 [2024-10-08 19:00:42.420982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:13.778 [2024-10-08 19:00:42.420994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:33:13.778 [2024-10-08 19:00:42.421005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.778 [2024-10-08 19:00:42.421032] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:13.778 [2024-10-08 19:00:42.421056] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:13.778 [2024-10-08 19:00:42.421095] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:13.778 [2024-10-08 19:00:42.421116] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:13.778 [2024-10-08 19:00:42.421218] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:13.778 [2024-10-08 19:00:42.421233] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:13.778 [2024-10-08 19:00:42.421247] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
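Note: the capacity and layout figures dumped below are self-consistent under a 4 KiB FTL block size (an assumption, but one the numbers themselves corroborate). Two quick checks, purely illustrative:

    # l2p region: 23592960 L2P entries x 4 bytes/entry = 90 MiB,
    # matching "Region l2p ... blocks: 90.00 MiB" below
    echo "$(( 23592960 * 4 / 1024 / 1024 )) MiB"    # -> 90 MiB
    # each p2l region: 2048 P2L checkpoint pages x 4096 bytes = 8 MiB,
    # matching "Region p2l0 ... blocks: 8.00 MiB" below
    echo "$(( 2048 * 4096 / 1024 / 1024 )) MiB"     # -> 8 MiB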
00:33:13.778 [2024-10-08 19:00:42.421262] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:13.778 [2024-10-08 19:00:42.421275] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:13.778 [2024-10-08 19:00:42.421288] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:33:13.778 [2024-10-08 19:00:42.421299] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:13.778 [2024-10-08 19:00:42.421310] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:13.778 [2024-10-08 19:00:42.421321] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:13.778 [2024-10-08 19:00:42.421332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:13.778 [2024-10-08 19:00:42.421343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:13.778 [2024-10-08 19:00:42.421358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:33:13.778 [2024-10-08 19:00:42.421369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.778 [2024-10-08 19:00:42.421454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:13.778 [2024-10-08 19:00:42.421466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:13.778 [2024-10-08 19:00:42.421477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:33:13.778 [2024-10-08 19:00:42.421488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.778 [2024-10-08 19:00:42.421587] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:13.778 [2024-10-08 19:00:42.421601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:13.778 [2024-10-08 19:00:42.421612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:13.778 [2024-10-08 19:00:42.421627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:13.778 [2024-10-08 19:00:42.421639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:13.778 [2024-10-08 19:00:42.421649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:13.778 [2024-10-08 19:00:42.421660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:33:13.778 [2024-10-08 19:00:42.421672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:13.778 [2024-10-08 19:00:42.421683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:33:13.778 [2024-10-08 19:00:42.421693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:13.778 [2024-10-08 19:00:42.421704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:13.778 [2024-10-08 19:00:42.421727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:33:13.778 [2024-10-08 19:00:42.421737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:13.778 [2024-10-08 19:00:42.421747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:13.778 [2024-10-08 19:00:42.421758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:33:13.778 [2024-10-08 19:00:42.421769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:13.778 [2024-10-08 19:00:42.421780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:33:13.778 [2024-10-08 19:00:42.421790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:33:13.778 [2024-10-08 19:00:42.421801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:13.778 [2024-10-08 19:00:42.421812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:13.778 [2024-10-08 19:00:42.421822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:33:13.778 [2024-10-08 19:00:42.421832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:13.778 [2024-10-08 19:00:42.421842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:13.778 [2024-10-08 19:00:42.421853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:33:13.778 [2024-10-08 19:00:42.421862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:13.778 [2024-10-08 19:00:42.421872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:13.778 [2024-10-08 19:00:42.421883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:33:13.779 [2024-10-08 19:00:42.421893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:13.779 [2024-10-08 19:00:42.421903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:13.779 [2024-10-08 19:00:42.421913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:33:13.779 [2024-10-08 19:00:42.421922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:13.779 [2024-10-08 19:00:42.421933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:13.779 [2024-10-08 19:00:42.421943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:33:13.779 [2024-10-08 19:00:42.421952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:13.779 [2024-10-08 19:00:42.421975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:13.779 [2024-10-08 19:00:42.421985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:33:13.779 [2024-10-08 19:00:42.421995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:13.779 [2024-10-08 19:00:42.422006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:13.779 [2024-10-08 19:00:42.422016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:33:13.779 [2024-10-08 19:00:42.422026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:13.779 [2024-10-08 19:00:42.422037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:13.779 [2024-10-08 19:00:42.422047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:33:13.779 [2024-10-08 19:00:42.422058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:13.779 [2024-10-08 19:00:42.422068] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:13.779 [2024-10-08 19:00:42.422079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:13.779 [2024-10-08 19:00:42.422090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:13.779 [2024-10-08 19:00:42.422101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:13.779 [2024-10-08 19:00:42.422112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:13.779 [2024-10-08 19:00:42.422122] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:13.779 [2024-10-08 19:00:42.422132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:13.779 [2024-10-08 19:00:42.422142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:13.779 [2024-10-08 19:00:42.422152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:13.779 [2024-10-08 19:00:42.422163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:13.779 [2024-10-08 19:00:42.422174] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:13.779 [2024-10-08 19:00:42.422187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:13.779 [2024-10-08 19:00:42.422205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:33:13.779 [2024-10-08 19:00:42.422218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:33:13.779 [2024-10-08 19:00:42.422229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:33:13.779 [2024-10-08 19:00:42.422241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:33:13.779 [2024-10-08 19:00:42.422252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:33:13.779 [2024-10-08 19:00:42.422263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:33:13.779 [2024-10-08 19:00:42.422274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:33:13.779 [2024-10-08 19:00:42.422286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:33:13.779 [2024-10-08 19:00:42.422298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:33:13.779 [2024-10-08 19:00:42.422309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:33:13.779 [2024-10-08 19:00:42.422320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:33:13.779 [2024-10-08 19:00:42.422331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:33:13.779 [2024-10-08 19:00:42.422342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:33:13.779 [2024-10-08 19:00:42.422354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:33:13.779 [2024-10-08 19:00:42.422365] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:13.779 [2024-10-08 19:00:42.422377] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:13.779 [2024-10-08 19:00:42.422390] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:13.779 [2024-10-08 19:00:42.422402] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:13.779 [2024-10-08 19:00:42.422413] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:13.779 [2024-10-08 19:00:42.422425] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:13.779 [2024-10-08 19:00:42.422440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:13.779 [2024-10-08 19:00:42.422455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:13.779 [2024-10-08 19:00:42.422465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.914 ms 00:33:13.779 [2024-10-08 19:00:42.422476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.779 [2024-10-08 19:00:42.477214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:13.779 [2024-10-08 19:00:42.477265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:13.779 [2024-10-08 19:00:42.477298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.676 ms 00:33:13.779 [2024-10-08 19:00:42.477310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.779 [2024-10-08 19:00:42.477498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:13.779 [2024-10-08 19:00:42.477513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:13.779 [2024-10-08 19:00:42.477526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:33:13.779 [2024-10-08 19:00:42.477537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.779 [2024-10-08 19:00:42.527904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:13.779 [2024-10-08 19:00:42.527949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:13.779 [2024-10-08 19:00:42.527976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.339 ms 00:33:13.779 [2024-10-08 19:00:42.527987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.779 [2024-10-08 19:00:42.528092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:13.779 [2024-10-08 19:00:42.528106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:13.779 [2024-10-08 19:00:42.528118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:13.779 [2024-10-08 19:00:42.528129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.779 [2024-10-08 19:00:42.528606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:13.779 [2024-10-08 19:00:42.528626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:13.779 [2024-10-08 19:00:42.528638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.446 ms 00:33:13.779 [2024-10-08 19:00:42.528649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:13.779 [2024-10-08 19:00:42.528778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:33:13.779 [2024-10-08 19:00:42.528792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:13.779 [2024-10-08 19:00:42.528804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:33:13.779 [2024-10-08 19:00:42.528815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.036 [2024-10-08 19:00:42.549220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.036 [2024-10-08 19:00:42.549264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:14.036 [2024-10-08 19:00:42.549280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.379 ms 00:33:14.036 [2024-10-08 19:00:42.549292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.036 [2024-10-08 19:00:42.569561] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:33:14.036 [2024-10-08 19:00:42.569603] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:14.036 [2024-10-08 19:00:42.569619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.036 [2024-10-08 19:00:42.569630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:14.036 [2024-10-08 19:00:42.569642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.168 ms 00:33:14.036 [2024-10-08 19:00:42.569652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.036 [2024-10-08 19:00:42.602258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.036 [2024-10-08 19:00:42.602300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:14.036 [2024-10-08 19:00:42.602322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.515 ms 00:33:14.036 [2024-10-08 19:00:42.602334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.036 [2024-10-08 19:00:42.622954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.036 [2024-10-08 19:00:42.622999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:14.036 [2024-10-08 19:00:42.623015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.503 ms 00:33:14.036 [2024-10-08 19:00:42.623025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.036 [2024-10-08 19:00:42.643520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.036 [2024-10-08 19:00:42.643557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:14.036 [2024-10-08 19:00:42.643572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.353 ms 00:33:14.036 [2024-10-08 19:00:42.643583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.036 [2024-10-08 19:00:42.644536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.036 [2024-10-08 19:00:42.644567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:14.036 [2024-10-08 19:00:42.644582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.840 ms 00:33:14.036 [2024-10-08 19:00:42.644605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.036 [2024-10-08 19:00:42.735209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.036 [2024-10-08 
19:00:42.735275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:33:14.036 [2024-10-08 19:00:42.735293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 90.571 ms 00:33:14.036 [2024-10-08 19:00:42.735304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.036 [2024-10-08 19:00:42.747217] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:33:14.036 [2024-10-08 19:00:42.763901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.036 [2024-10-08 19:00:42.763967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:14.036 [2024-10-08 19:00:42.763984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.444 ms 00:33:14.036 [2024-10-08 19:00:42.763995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.036 [2024-10-08 19:00:42.764123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.036 [2024-10-08 19:00:42.764138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:14.036 [2024-10-08 19:00:42.764150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:33:14.036 [2024-10-08 19:00:42.764161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.036 [2024-10-08 19:00:42.764220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.036 [2024-10-08 19:00:42.764234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:14.036 [2024-10-08 19:00:42.764245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:33:14.036 [2024-10-08 19:00:42.764255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.036 [2024-10-08 19:00:42.764279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.036 [2024-10-08 19:00:42.764290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:14.036 [2024-10-08 19:00:42.764301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:33:14.036 [2024-10-08 19:00:42.764311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.036 [2024-10-08 19:00:42.764348] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:14.036 [2024-10-08 19:00:42.764378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.036 [2024-10-08 19:00:42.764389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:14.036 [2024-10-08 19:00:42.764404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:33:14.036 [2024-10-08 19:00:42.764415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.294 [2024-10-08 19:00:42.802529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.294 [2024-10-08 19:00:42.802571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:14.294 [2024-10-08 19:00:42.802586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.089 ms 00:33:14.294 [2024-10-08 19:00:42.802597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.294 [2024-10-08 19:00:42.802716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:14.294 [2024-10-08 19:00:42.802734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:14.294 [2024-10-08 
19:00:42.802745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:33:14.294 [2024-10-08 19:00:42.802755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:14.294 [2024-10-08 19:00:42.803745] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:14.294 [2024-10-08 19:00:42.808411] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 426.111 ms, result 0 00:33:14.294 [2024-10-08 19:00:42.809326] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:14.294 [2024-10-08 19:00:42.828785] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:15.229  [2024-10-08T19:00:44.922Z] Copying: 34/256 [MB] (34 MBps) [2024-10-08T19:00:46.314Z] Copying: 66/256 [MB] (31 MBps) [2024-10-08T19:00:47.253Z] Copying: 96/256 [MB] (30 MBps) [2024-10-08T19:00:48.187Z] Copying: 127/256 [MB] (31 MBps) [2024-10-08T19:00:49.154Z] Copying: 159/256 [MB] (31 MBps) [2024-10-08T19:00:50.089Z] Copying: 190/256 [MB] (31 MBps) [2024-10-08T19:00:51.022Z] Copying: 221/256 [MB] (30 MBps) [2024-10-08T19:00:51.280Z] Copying: 251/256 [MB] (30 MBps) [2024-10-08T19:00:51.539Z] Copying: 256/256 [MB] (average 31 MBps)[2024-10-08 19:00:51.365191] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:22.782 [2024-10-08 19:00:51.382183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.782 [2024-10-08 19:00:51.382239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:22.782 [2024-10-08 19:00:51.382255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:22.782 [2024-10-08 19:00:51.382265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.782 [2024-10-08 19:00:51.382294] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:33:22.782 [2024-10-08 19:00:51.386844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.782 [2024-10-08 19:00:51.386877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:22.782 [2024-10-08 19:00:51.386890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.532 ms 00:33:22.782 [2024-10-08 19:00:51.386901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.782 [2024-10-08 19:00:51.387166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.782 [2024-10-08 19:00:51.387185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:22.782 [2024-10-08 19:00:51.387197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.233 ms 00:33:22.782 [2024-10-08 19:00:51.387207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.782 [2024-10-08 19:00:51.390404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.782 [2024-10-08 19:00:51.390428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:33:22.782 [2024-10-08 19:00:51.390440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.180 ms 00:33:22.782 [2024-10-08 19:00:51.390451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.782 [2024-10-08 19:00:51.396317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:33:22.782 [2024-10-08 19:00:51.396351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:33:22.782 [2024-10-08 19:00:51.396369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.843 ms 00:33:22.782 [2024-10-08 19:00:51.396380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.782 [2024-10-08 19:00:51.435389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.782 [2024-10-08 19:00:51.435438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:33:22.782 [2024-10-08 19:00:51.435461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.929 ms 00:33:22.782 [2024-10-08 19:00:51.435471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.782 [2024-10-08 19:00:51.456302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.782 [2024-10-08 19:00:51.456350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:33:22.782 [2024-10-08 19:00:51.456366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.761 ms 00:33:22.782 [2024-10-08 19:00:51.456377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.782 [2024-10-08 19:00:51.456530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.782 [2024-10-08 19:00:51.456545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:33:22.782 [2024-10-08 19:00:51.456557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:33:22.782 [2024-10-08 19:00:51.456567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.782 [2024-10-08 19:00:51.495438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.782 [2024-10-08 19:00:51.495666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:33:22.782 [2024-10-08 19:00:51.495692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.844 ms 00:33:22.782 [2024-10-08 19:00:51.495704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.782 [2024-10-08 19:00:51.533346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.782 [2024-10-08 19:00:51.533498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:33:22.783 [2024-10-08 19:00:51.533635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.561 ms 00:33:22.783 [2024-10-08 19:00:51.533676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.042 [2024-10-08 19:00:51.570222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:23.042 [2024-10-08 19:00:51.570375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:33:23.042 [2024-10-08 19:00:51.570497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.392 ms 00:33:23.042 [2024-10-08 19:00:51.570534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.042 [2024-10-08 19:00:51.607539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:23.042 [2024-10-08 19:00:51.607743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:33:23.042 [2024-10-08 19:00:51.607861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.890 ms 00:33:23.042 [2024-10-08 19:00:51.607903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.042 [2024-10-08 
19:00:51.608000] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:23.042 [2024-10-08 19:00:51.608058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:33:23.042 [2024-10-08 19:00:51.608180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:23.042 [2024-10-08 19:00:51.608242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:23.042 [2024-10-08 19:00:51.608307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:23.042 [2024-10-08 19:00:51.608446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:23.042 [2024-10-08 19:00:51.608499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:23.042 [2024-10-08 19:00:51.608590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:23.042 [2024-10-08 19:00:51.608645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:23.042 [2024-10-08 19:00:51.608806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:23.042 [2024-10-08 19:00:51.608858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.608950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.609015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.609067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.609174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.609224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.609311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.609407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.609492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.609548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.609705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.609760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.609807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.609973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.610030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 
19:00:51.610081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.610228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.610284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.610436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.610493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.610546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.610693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.610745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.610837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.610894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.610947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.611121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.611139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.611150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.611162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.611173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.611184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.611194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.611205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.611216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.611226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.611236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.611247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.611258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.611268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 
00:33:23.043 [2024-10-08 19:00:51.611278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.611289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.611299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.611309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.611320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.611330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.611341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.611351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.611361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.611389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.611400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.611412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.611423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.611434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.611455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.611467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.611480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.611491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.611503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.611515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.611526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.611538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.611549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.611561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.611572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 
wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.611584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.611595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.611606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.611618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.611629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.611641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.611657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.611668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.611680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.611692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.611703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.611715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.611727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:23.043 [2024-10-08 19:00:51.611738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:23.044 [2024-10-08 19:00:51.611750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:23.044 [2024-10-08 19:00:51.611762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:23.044 [2024-10-08 19:00:51.611773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:23.044 [2024-10-08 19:00:51.611784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:23.044 [2024-10-08 19:00:51.611796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:23.044 [2024-10-08 19:00:51.611808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:23.044 [2024-10-08 19:00:51.611819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:23.044 [2024-10-08 19:00:51.611831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:23.044 [2024-10-08 19:00:51.611842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:23.044 [2024-10-08 19:00:51.611854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:23.044 [2024-10-08 19:00:51.611865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:23.044 [2024-10-08 19:00:51.611891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:23.044 [2024-10-08 19:00:51.611911] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:23.044 [2024-10-08 19:00:51.611924] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1e6e43eb-28c1-40da-a9ff-547ddd670846 00:33:23.044 [2024-10-08 19:00:51.611936] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:33:23.044 [2024-10-08 19:00:51.611947] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:33:23.044 [2024-10-08 19:00:51.612211] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:33:23.044 [2024-10-08 19:00:51.612265] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:33:23.044 [2024-10-08 19:00:51.612299] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:23.044 [2024-10-08 19:00:51.612382] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:23.044 [2024-10-08 19:00:51.612420] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:23.044 [2024-10-08 19:00:51.612452] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:23.044 [2024-10-08 19:00:51.612483] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:23.044 [2024-10-08 19:00:51.612577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:23.044 [2024-10-08 19:00:51.612611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:23.044 [2024-10-08 19:00:51.612677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.578 ms 00:33:23.044 [2024-10-08 19:00:51.612710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.044 [2024-10-08 19:00:51.633758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:23.044 [2024-10-08 19:00:51.633902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:23.044 [2024-10-08 19:00:51.634041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.000 ms 00:33:23.044 [2024-10-08 19:00:51.634080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.044 [2024-10-08 19:00:51.634686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:23.044 [2024-10-08 19:00:51.634781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:23.044 [2024-10-08 19:00:51.634798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.538 ms 00:33:23.044 [2024-10-08 19:00:51.634810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.044 [2024-10-08 19:00:51.685186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:23.044 [2024-10-08 19:00:51.685351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:23.044 [2024-10-08 19:00:51.685482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:23.044 [2024-10-08 19:00:51.685538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.044 [2024-10-08 19:00:51.685651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:23.044 [2024-10-08 19:00:51.685696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:23.044 [2024-10-08 19:00:51.685792] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:23.044 [2024-10-08 19:00:51.685832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.044 [2024-10-08 19:00:51.685978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:23.044 [2024-10-08 19:00:51.686039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:23.044 [2024-10-08 19:00:51.686192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:23.044 [2024-10-08 19:00:51.686233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.044 [2024-10-08 19:00:51.686281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:23.044 [2024-10-08 19:00:51.686316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:23.044 [2024-10-08 19:00:51.686478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:23.044 [2024-10-08 19:00:51.686518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.304 [2024-10-08 19:00:51.815848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:23.304 [2024-10-08 19:00:51.816111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:23.304 [2024-10-08 19:00:51.816230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:23.305 [2024-10-08 19:00:51.816247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.305 [2024-10-08 19:00:51.921976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:23.305 [2024-10-08 19:00:51.922026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:23.305 [2024-10-08 19:00:51.922041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:23.305 [2024-10-08 19:00:51.922052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.305 [2024-10-08 19:00:51.922156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:23.305 [2024-10-08 19:00:51.922168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:23.305 [2024-10-08 19:00:51.922179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:23.305 [2024-10-08 19:00:51.922194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.305 [2024-10-08 19:00:51.922224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:23.305 [2024-10-08 19:00:51.922235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:23.305 [2024-10-08 19:00:51.922246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:23.305 [2024-10-08 19:00:51.922256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.305 [2024-10-08 19:00:51.922374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:23.305 [2024-10-08 19:00:51.922388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:23.305 [2024-10-08 19:00:51.922399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:23.305 [2024-10-08 19:00:51.922413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.305 [2024-10-08 19:00:51.922450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:23.305 [2024-10-08 19:00:51.922462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize superblock 00:33:23.305 [2024-10-08 19:00:51.922473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:23.305 [2024-10-08 19:00:51.922483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.305 [2024-10-08 19:00:51.922522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:23.305 [2024-10-08 19:00:51.922538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:23.305 [2024-10-08 19:00:51.922549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:23.305 [2024-10-08 19:00:51.922559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.305 [2024-10-08 19:00:51.922608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:23.305 [2024-10-08 19:00:51.922630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:23.305 [2024-10-08 19:00:51.922640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:23.305 [2024-10-08 19:00:51.922650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.305 [2024-10-08 19:00:51.922793] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 540.612 ms, result 0 00:33:24.682 00:33:24.682 00:33:24.682 19:00:53 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:24.941 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:33:24.941 19:00:53 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:33:24.941 19:00:53 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:33:24.941 19:00:53 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:24.941 19:00:53 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:33:24.941 19:00:53 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:33:25.233 19:00:53 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:33:25.233 19:00:53 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 77130 00:33:25.233 19:00:53 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 77130 ']' 00:33:25.233 19:00:53 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 77130 00:33:25.233 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (77130) - No such process 00:33:25.233 Process with pid 77130 is not found 00:33:25.233 19:00:53 ftl.ftl_trim -- common/autotest_common.sh@977 -- # echo 'Process with pid 77130 is not found' 00:33:25.233 00:33:25.233 real 1m8.138s 00:33:25.233 user 1m31.859s 00:33:25.233 sys 0m7.228s 00:33:25.233 ************************************ 00:33:25.233 END TEST ftl_trim 00:33:25.233 ************************************ 00:33:25.233 19:00:53 ftl.ftl_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:25.233 19:00:53 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:33:25.233 19:00:53 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:33:25.233 19:00:53 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:33:25.233 19:00:53 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:25.233 19:00:53 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:25.233 ************************************ 00:33:25.233 START TEST ftl_restore 00:33:25.233 
************************************ 00:33:25.233 19:00:53 ftl.ftl_restore -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:33:25.233 * Looking for test storage... 00:33:25.233 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:33:25.233 19:00:53 ftl.ftl_restore -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:25.233 19:00:53 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lcov --version 00:33:25.233 19:00:53 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:25.494 19:00:54 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:25.494 19:00:54 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:25.494 19:00:54 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:25.494 19:00:54 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:25.494 19:00:54 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:33:25.494 19:00:54 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:33:25.494 19:00:54 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:33:25.494 19:00:54 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:33:25.494 19:00:54 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:33:25.494 19:00:54 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:33:25.494 19:00:54 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:33:25.494 19:00:54 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:25.494 19:00:54 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:33:25.494 19:00:54 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:33:25.494 19:00:54 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:25.494 19:00:54 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:25.494 19:00:54 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:33:25.494 19:00:54 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:33:25.494 19:00:54 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:25.494 19:00:54 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:33:25.494 19:00:54 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:33:25.494 19:00:54 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:33:25.494 19:00:54 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:33:25.494 19:00:54 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:25.494 19:00:54 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:33:25.494 19:00:54 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:33:25.494 19:00:54 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:25.494 19:00:54 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:25.494 19:00:54 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:33:25.494 19:00:54 ftl.ftl_restore -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:25.494 19:00:54 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:25.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:25.495 --rc genhtml_branch_coverage=1 00:33:25.495 --rc genhtml_function_coverage=1 00:33:25.495 --rc genhtml_legend=1 00:33:25.495 --rc geninfo_all_blocks=1 00:33:25.495 --rc geninfo_unexecuted_blocks=1 00:33:25.495 00:33:25.495 ' 00:33:25.495 19:00:54 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:25.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:25.495 --rc genhtml_branch_coverage=1 00:33:25.495 --rc genhtml_function_coverage=1 00:33:25.495 --rc genhtml_legend=1 00:33:25.495 --rc geninfo_all_blocks=1 00:33:25.495 --rc geninfo_unexecuted_blocks=1 00:33:25.495 00:33:25.495 ' 00:33:25.495 19:00:54 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:25.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:25.495 --rc genhtml_branch_coverage=1 00:33:25.495 --rc genhtml_function_coverage=1 00:33:25.495 --rc genhtml_legend=1 00:33:25.495 --rc geninfo_all_blocks=1 00:33:25.495 --rc geninfo_unexecuted_blocks=1 00:33:25.495 00:33:25.495 ' 00:33:25.495 19:00:54 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:25.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:25.495 --rc genhtml_branch_coverage=1 00:33:25.495 --rc genhtml_function_coverage=1 00:33:25.495 --rc genhtml_legend=1 00:33:25.495 --rc geninfo_all_blocks=1 00:33:25.495 --rc geninfo_unexecuted_blocks=1 00:33:25.495 00:33:25.495 ' 00:33:25.495 19:00:54 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:33:25.495 19:00:54 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:33:25.495 19:00:54 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:33:25.495 19:00:54 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:33:25.495 19:00:54 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
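The xtrace above is scripts/common.sh deciding which lcov flags to pass: `lt 1.15 2` splits both version strings into fields and compares them position by position, concluding that lcov 1.15 predates 2.x and needs the legacy `--rc` options. A minimal standalone sketch of the same comparison (the harness's cmp_versions also handles `-` and `:` separators and the greater-than case, which this omits):

    lt() {
      # Return 0 (true) when dotted version $1 sorts before $2.
      local -a a b
      local i
      IFS=. read -ra a <<< "$1"
      IFS=. read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing fields count as 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1   # equal versions are not less-than
    }
    lt 1.15 2 && echo "lcov predates 2.x, use legacy --rc options"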
00:33:25.495 19:00:54 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:33:25.495 19:00:54 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:25.495 19:00:54 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:33:25.495 19:00:54 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:33:25.495 19:00:54 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:25.495 19:00:54 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:25.495 19:00:54 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:33:25.495 19:00:54 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:33:25.495 19:00:54 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:25.495 19:00:54 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:25.495 19:00:54 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:33:25.495 19:00:54 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:33:25.495 19:00:54 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:25.495 19:00:54 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:25.495 19:00:54 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:33:25.495 19:00:54 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:33:25.495 19:00:54 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:33:25.495 19:00:54 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:33:25.495 19:00:54 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:33:25.495 19:00:54 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:33:25.496 19:00:54 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:33:25.496 19:00:54 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:33:25.496 19:00:54 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:25.496 19:00:54 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:25.496 19:00:54 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:25.496 19:00:54 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:33:25.496 19:00:54 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.bdoMz9AuUi 00:33:25.496 19:00:54 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:33:25.496 19:00:54 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:33:25.496 19:00:54 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:33:25.496 19:00:54 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:33:25.496 19:00:54 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:33:25.496 19:00:54 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:33:25.496 19:00:54 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:33:25.496 19:00:54 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:33:25.496 
19:00:54 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=77397 00:33:25.496 19:00:54 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:25.496 19:00:54 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 77397 00:33:25.496 19:00:54 ftl.ftl_restore -- common/autotest_common.sh@831 -- # '[' -z 77397 ']' 00:33:25.496 19:00:54 ftl.ftl_restore -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:25.496 19:00:54 ftl.ftl_restore -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:25.496 19:00:54 ftl.ftl_restore -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:25.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:25.496 19:00:54 ftl.ftl_restore -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:25.496 19:00:54 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:33:25.496 [2024-10-08 19:00:54.244761] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:33:25.496 [2024-10-08 19:00:54.244950] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77397 ] 00:33:25.763 [2024-10-08 19:00:54.409051] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:26.021 [2024-10-08 19:00:54.612280] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:33:26.958 19:00:55 ftl.ftl_restore -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:26.958 19:00:55 ftl.ftl_restore -- common/autotest_common.sh@864 -- # return 0 00:33:26.958 19:00:55 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:33:26.958 19:00:55 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:33:26.958 19:00:55 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:33:26.958 19:00:55 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:33:26.958 19:00:55 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:33:26.958 19:00:55 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:33:27.217 19:00:55 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:33:27.217 19:00:55 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:33:27.217 19:00:55 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:33:27.217 19:00:55 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:33:27.217 19:00:55 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:33:27.217 19:00:55 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:33:27.217 19:00:55 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:33:27.217 19:00:55 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:33:27.475 19:00:56 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:33:27.475 { 00:33:27.475 "name": "nvme0n1", 00:33:27.475 "aliases": [ 00:33:27.475 "76a4669d-f8c2-4f5b-a08d-fe2328e3f93d" 00:33:27.475 ], 00:33:27.475 "product_name": "NVMe disk", 00:33:27.475 "block_size": 4096, 00:33:27.475 "num_blocks": 1310720, 00:33:27.475 "uuid": 
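restore.sh was invoked as `restore.sh -c 0000:00:10.0 0000:00:11.0`, and the `getopts :u:c:f` trace above shows how that maps onto the test: `-c` supplies the NV-cache PCI address, the first positional argument becomes the base device, and RPCs get a 240 s timeout. A sketch of that option handling, not the verbatim script (the trace shifts by a fixed count, and the `-u`/`-f` handler names here are illustrative):

    while getopts ':u:c:f' opt; do
      case $opt in
        c) nv_cache=$OPTARG ;;   # 0000:00:10.0 in this run
        u) uuid=$OPTARG ;;       # illustrative: -u takes a value per the optstring
        f) use_json=1 ;;         # illustrative: -f is a bare flag
      esac
    done
    shift $((OPTIND - 1))
    device=$1                    # 0000:00:11.0 in this run
    timeout=240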
"76a4669d-f8c2-4f5b-a08d-fe2328e3f93d", 00:33:27.475 "numa_id": -1, 00:33:27.475 "assigned_rate_limits": { 00:33:27.475 "rw_ios_per_sec": 0, 00:33:27.475 "rw_mbytes_per_sec": 0, 00:33:27.475 "r_mbytes_per_sec": 0, 00:33:27.475 "w_mbytes_per_sec": 0 00:33:27.475 }, 00:33:27.475 "claimed": true, 00:33:27.475 "claim_type": "read_many_write_one", 00:33:27.475 "zoned": false, 00:33:27.475 "supported_io_types": { 00:33:27.475 "read": true, 00:33:27.475 "write": true, 00:33:27.475 "unmap": true, 00:33:27.475 "flush": true, 00:33:27.475 "reset": true, 00:33:27.475 "nvme_admin": true, 00:33:27.475 "nvme_io": true, 00:33:27.475 "nvme_io_md": false, 00:33:27.475 "write_zeroes": true, 00:33:27.475 "zcopy": false, 00:33:27.475 "get_zone_info": false, 00:33:27.475 "zone_management": false, 00:33:27.475 "zone_append": false, 00:33:27.475 "compare": true, 00:33:27.475 "compare_and_write": false, 00:33:27.475 "abort": true, 00:33:27.475 "seek_hole": false, 00:33:27.475 "seek_data": false, 00:33:27.475 "copy": true, 00:33:27.475 "nvme_iov_md": false 00:33:27.475 }, 00:33:27.475 "driver_specific": { 00:33:27.475 "nvme": [ 00:33:27.475 { 00:33:27.475 "pci_address": "0000:00:11.0", 00:33:27.475 "trid": { 00:33:27.475 "trtype": "PCIe", 00:33:27.475 "traddr": "0000:00:11.0" 00:33:27.475 }, 00:33:27.475 "ctrlr_data": { 00:33:27.475 "cntlid": 0, 00:33:27.475 "vendor_id": "0x1b36", 00:33:27.475 "model_number": "QEMU NVMe Ctrl", 00:33:27.475 "serial_number": "12341", 00:33:27.475 "firmware_revision": "8.0.0", 00:33:27.475 "subnqn": "nqn.2019-08.org.qemu:12341", 00:33:27.475 "oacs": { 00:33:27.475 "security": 0, 00:33:27.475 "format": 1, 00:33:27.475 "firmware": 0, 00:33:27.475 "ns_manage": 1 00:33:27.475 }, 00:33:27.475 "multi_ctrlr": false, 00:33:27.475 "ana_reporting": false 00:33:27.475 }, 00:33:27.475 "vs": { 00:33:27.475 "nvme_version": "1.4" 00:33:27.475 }, 00:33:27.475 "ns_data": { 00:33:27.475 "id": 1, 00:33:27.475 "can_share": false 00:33:27.475 } 00:33:27.475 } 00:33:27.475 ], 00:33:27.475 "mp_policy": "active_passive" 00:33:27.475 } 00:33:27.475 } 00:33:27.475 ]' 00:33:27.475 19:00:56 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:33:27.475 19:00:56 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:33:27.475 19:00:56 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:33:27.475 19:00:56 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:33:27.475 19:00:56 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:33:27.475 19:00:56 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:33:27.475 19:00:56 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:33:27.475 19:00:56 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:33:27.475 19:00:56 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:33:27.475 19:00:56 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:27.475 19:00:56 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:33:27.734 19:00:56 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=427e7d37-7acd-47df-8156-30d75c7c5066 00:33:27.734 19:00:56 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:33:27.734 19:00:56 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 427e7d37-7acd-47df-8156-30d75c7c5066 00:33:27.992 19:00:56 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:33:28.251 19:00:56 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=ea1f3826-5211-4107-9e54-ceb0ca22b05c 00:33:28.251 19:00:56 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ea1f3826-5211-4107-9e54-ceb0ca22b05c 00:33:28.510 19:00:57 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=c51e69ad-1898-4dea-a3ba-4d733946f052 00:33:28.510 19:00:57 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:33:28.510 19:00:57 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 c51e69ad-1898-4dea-a3ba-4d733946f052 00:33:28.510 19:00:57 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:33:28.510 19:00:57 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:33:28.510 19:00:57 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=c51e69ad-1898-4dea-a3ba-4d733946f052 00:33:28.510 19:00:57 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:33:28.510 19:00:57 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size c51e69ad-1898-4dea-a3ba-4d733946f052 00:33:28.510 19:00:57 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=c51e69ad-1898-4dea-a3ba-4d733946f052 00:33:28.510 19:00:57 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:33:28.510 19:00:57 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:33:28.510 19:00:57 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:33:28.510 19:00:57 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c51e69ad-1898-4dea-a3ba-4d733946f052 00:33:28.510 19:00:57 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:33:28.510 { 00:33:28.510 "name": "c51e69ad-1898-4dea-a3ba-4d733946f052", 00:33:28.510 "aliases": [ 00:33:28.510 "lvs/nvme0n1p0" 00:33:28.510 ], 00:33:28.510 "product_name": "Logical Volume", 00:33:28.510 "block_size": 4096, 00:33:28.510 "num_blocks": 26476544, 00:33:28.510 "uuid": "c51e69ad-1898-4dea-a3ba-4d733946f052", 00:33:28.510 "assigned_rate_limits": { 00:33:28.510 "rw_ios_per_sec": 0, 00:33:28.510 "rw_mbytes_per_sec": 0, 00:33:28.510 "r_mbytes_per_sec": 0, 00:33:28.510 "w_mbytes_per_sec": 0 00:33:28.510 }, 00:33:28.510 "claimed": false, 00:33:28.510 "zoned": false, 00:33:28.510 "supported_io_types": { 00:33:28.510 "read": true, 00:33:28.510 "write": true, 00:33:28.510 "unmap": true, 00:33:28.510 "flush": false, 00:33:28.510 "reset": true, 00:33:28.510 "nvme_admin": false, 00:33:28.510 "nvme_io": false, 00:33:28.510 "nvme_io_md": false, 00:33:28.510 "write_zeroes": true, 00:33:28.510 "zcopy": false, 00:33:28.510 "get_zone_info": false, 00:33:28.510 "zone_management": false, 00:33:28.510 "zone_append": false, 00:33:28.510 "compare": false, 00:33:28.510 "compare_and_write": false, 00:33:28.510 "abort": false, 00:33:28.510 "seek_hole": true, 00:33:28.510 "seek_data": true, 00:33:28.510 "copy": false, 00:33:28.510 "nvme_iov_md": false 00:33:28.510 }, 00:33:28.510 "driver_specific": { 00:33:28.510 "lvol": { 00:33:28.510 "lvol_store_uuid": "ea1f3826-5211-4107-9e54-ceb0ca22b05c", 00:33:28.510 "base_bdev": "nvme0n1", 00:33:28.510 "thin_provision": true, 00:33:28.510 "num_allocated_clusters": 0, 00:33:28.510 "snapshot": false, 00:33:28.510 "clone": false, 00:33:28.510 "esnap_clone": false 00:33:28.510 } 00:33:28.510 } 00:33:28.510 } 00:33:28.510 ]' 00:33:28.510 19:00:57 ftl.ftl_restore -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:33:28.768 19:00:57 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:33:28.768 19:00:57 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:33:28.768 19:00:57 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:33:28.768 19:00:57 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:33:28.768 19:00:57 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:33:28.768 19:00:57 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:33:28.768 19:00:57 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:33:28.768 19:00:57 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:33:29.028 19:00:57 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:33:29.028 19:00:57 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:33:29.028 19:00:57 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size c51e69ad-1898-4dea-a3ba-4d733946f052 00:33:29.028 19:00:57 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=c51e69ad-1898-4dea-a3ba-4d733946f052 00:33:29.028 19:00:57 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:33:29.028 19:00:57 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:33:29.028 19:00:57 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:33:29.028 19:00:57 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c51e69ad-1898-4dea-a3ba-4d733946f052 00:33:29.286 19:00:57 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:33:29.286 { 00:33:29.286 "name": "c51e69ad-1898-4dea-a3ba-4d733946f052", 00:33:29.286 "aliases": [ 00:33:29.286 "lvs/nvme0n1p0" 00:33:29.286 ], 00:33:29.286 "product_name": "Logical Volume", 00:33:29.286 "block_size": 4096, 00:33:29.286 "num_blocks": 26476544, 00:33:29.286 "uuid": "c51e69ad-1898-4dea-a3ba-4d733946f052", 00:33:29.286 "assigned_rate_limits": { 00:33:29.286 "rw_ios_per_sec": 0, 00:33:29.286 "rw_mbytes_per_sec": 0, 00:33:29.286 "r_mbytes_per_sec": 0, 00:33:29.286 "w_mbytes_per_sec": 0 00:33:29.286 }, 00:33:29.286 "claimed": false, 00:33:29.286 "zoned": false, 00:33:29.286 "supported_io_types": { 00:33:29.286 "read": true, 00:33:29.286 "write": true, 00:33:29.286 "unmap": true, 00:33:29.286 "flush": false, 00:33:29.286 "reset": true, 00:33:29.286 "nvme_admin": false, 00:33:29.286 "nvme_io": false, 00:33:29.286 "nvme_io_md": false, 00:33:29.286 "write_zeroes": true, 00:33:29.286 "zcopy": false, 00:33:29.286 "get_zone_info": false, 00:33:29.286 "zone_management": false, 00:33:29.286 "zone_append": false, 00:33:29.286 "compare": false, 00:33:29.286 "compare_and_write": false, 00:33:29.286 "abort": false, 00:33:29.286 "seek_hole": true, 00:33:29.286 "seek_data": true, 00:33:29.286 "copy": false, 00:33:29.286 "nvme_iov_md": false 00:33:29.286 }, 00:33:29.286 "driver_specific": { 00:33:29.286 "lvol": { 00:33:29.286 "lvol_store_uuid": "ea1f3826-5211-4107-9e54-ceb0ca22b05c", 00:33:29.286 "base_bdev": "nvme0n1", 00:33:29.286 "thin_provision": true, 00:33:29.286 "num_allocated_clusters": 0, 00:33:29.286 "snapshot": false, 00:33:29.286 "clone": false, 00:33:29.286 "esnap_clone": false 00:33:29.286 } 00:33:29.286 } 00:33:29.286 } 00:33:29.286 ]' 00:33:29.286 19:00:57 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 
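The lvstore dance above carves the working volume out of nvme0n1: any stale lvstore is deleted, a fresh one is created, and a thin-provisioned 103424 MiB lvol goes on top (thin, because the namespace itself is only 5120 MiB). The same RPCs, condensed (UUIDs differ per run):

    "$rpc" bdev_lvol_get_lvstores | jq -r '.[] | .uuid' | while read -r stale; do
      "$rpc" bdev_lvol_delete_lvstore -u "$stale"       # 427e7d37-... in this run
    done
    lvs=$("$rpc" bdev_lvol_create_lvstore nvme0n1 lvs)  # -> ea1f3826-... here
    lvol=$("$rpc" bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")

The cache side is prepared symmetrically: nvc0 is attached at 0000:00:10.0 and, a few steps below, `bdev_split_create nvc0n1 -s 5171 1` carves a single 5171 MiB partition, about 5% of the 103424 MiB base volume, to serve as the FTL write-buffer cache.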
00:33:29.286 19:00:57 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:33:29.286 19:00:57 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:33:29.286 19:00:57 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:33:29.286 19:00:57 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:33:29.286 19:00:57 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:33:29.286 19:00:57 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:33:29.286 19:00:57 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:33:29.545 19:00:58 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:33:29.545 19:00:58 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size c51e69ad-1898-4dea-a3ba-4d733946f052 00:33:29.545 19:00:58 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=c51e69ad-1898-4dea-a3ba-4d733946f052 00:33:29.545 19:00:58 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:33:29.545 19:00:58 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:33:29.545 19:00:58 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:33:29.545 19:00:58 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c51e69ad-1898-4dea-a3ba-4d733946f052 00:33:29.803 19:00:58 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:33:29.803 { 00:33:29.803 "name": "c51e69ad-1898-4dea-a3ba-4d733946f052", 00:33:29.803 "aliases": [ 00:33:29.803 "lvs/nvme0n1p0" 00:33:29.803 ], 00:33:29.803 "product_name": "Logical Volume", 00:33:29.803 "block_size": 4096, 00:33:29.803 "num_blocks": 26476544, 00:33:29.803 "uuid": "c51e69ad-1898-4dea-a3ba-4d733946f052", 00:33:29.803 "assigned_rate_limits": { 00:33:29.803 "rw_ios_per_sec": 0, 00:33:29.803 "rw_mbytes_per_sec": 0, 00:33:29.803 "r_mbytes_per_sec": 0, 00:33:29.803 "w_mbytes_per_sec": 0 00:33:29.803 }, 00:33:29.803 "claimed": false, 00:33:29.804 "zoned": false, 00:33:29.804 "supported_io_types": { 00:33:29.804 "read": true, 00:33:29.804 "write": true, 00:33:29.804 "unmap": true, 00:33:29.804 "flush": false, 00:33:29.804 "reset": true, 00:33:29.804 "nvme_admin": false, 00:33:29.804 "nvme_io": false, 00:33:29.804 "nvme_io_md": false, 00:33:29.804 "write_zeroes": true, 00:33:29.804 "zcopy": false, 00:33:29.804 "get_zone_info": false, 00:33:29.804 "zone_management": false, 00:33:29.804 "zone_append": false, 00:33:29.804 "compare": false, 00:33:29.804 "compare_and_write": false, 00:33:29.804 "abort": false, 00:33:29.804 "seek_hole": true, 00:33:29.804 "seek_data": true, 00:33:29.804 "copy": false, 00:33:29.804 "nvme_iov_md": false 00:33:29.804 }, 00:33:29.804 "driver_specific": { 00:33:29.804 "lvol": { 00:33:29.804 "lvol_store_uuid": "ea1f3826-5211-4107-9e54-ceb0ca22b05c", 00:33:29.804 "base_bdev": "nvme0n1", 00:33:29.804 "thin_provision": true, 00:33:29.804 "num_allocated_clusters": 0, 00:33:29.804 "snapshot": false, 00:33:29.804 "clone": false, 00:33:29.804 "esnap_clone": false 00:33:29.804 } 00:33:29.804 } 00:33:29.804 } 00:33:29.804 ]' 00:33:29.804 19:00:58 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:33:29.804 19:00:58 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:33:29.804 19:00:58 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:33:29.804 19:00:58 ftl.ftl_restore -- 
common/autotest_common.sh@1384 -- # nb=26476544 00:33:29.804 19:00:58 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:33:29.804 19:00:58 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:33:29.804 19:00:58 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:33:29.804 19:00:58 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d c51e69ad-1898-4dea-a3ba-4d733946f052 --l2p_dram_limit 10' 00:33:29.804 19:00:58 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:33:29.804 19:00:58 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:33:29.804 19:00:58 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:33:29.804 19:00:58 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:33:29.804 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:33:29.804 19:00:58 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d c51e69ad-1898-4dea-a3ba-4d733946f052 --l2p_dram_limit 10 -c nvc0n1p0 00:33:30.063 [2024-10-08 19:00:58.701116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:30.063 [2024-10-08 19:00:58.701176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:30.063 [2024-10-08 19:00:58.701197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:30.063 [2024-10-08 19:00:58.701208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:30.063 [2024-10-08 19:00:58.701273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:30.063 [2024-10-08 19:00:58.701285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:30.063 [2024-10-08 19:00:58.701299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:33:30.063 [2024-10-08 19:00:58.701310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:30.063 [2024-10-08 19:00:58.701342] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:30.063 [2024-10-08 19:00:58.702343] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:30.063 [2024-10-08 19:00:58.702382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:30.063 [2024-10-08 19:00:58.702394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:30.063 [2024-10-08 19:00:58.702408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.047 ms 00:33:30.064 [2024-10-08 19:00:58.702421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:30.064 [2024-10-08 19:00:58.702465] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 88cec975-96d4-4e29-9174-d0217503c41a 00:33:30.064 [2024-10-08 19:00:58.704040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:30.064 [2024-10-08 19:00:58.704079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:33:30.064 [2024-10-08 19:00:58.704092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:33:30.064 [2024-10-08 19:00:58.704105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:30.064 [2024-10-08 19:00:58.711628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:30.064 [2024-10-08 
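One real script bug surfaces above: `'[' '' -eq 1 ']'` at restore.sh line 54 hands test(1) an empty string where an integer is required, producing "integer expression expected". The test exits with status 2, the `if` treats that as false, and the run continues, but the robust idiom is to default the operand before comparing. A sketch (the flag name is illustrative, not taken from restore.sh):

    flag=''                              # what line 54 effectively had
    # [ "$flag" -eq 1 ]                  # -> "integer expression expected", status 2
    if [ "${flag:-0}" -eq 1 ]; then      # default empty/unset to 0 first
      echo "flag enabled"
    fi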
19:00:58.711845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:30.064 [2024-10-08 19:00:58.711871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.431 ms 00:33:30.064 [2024-10-08 19:00:58.711886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:30.064 [2024-10-08 19:00:58.712025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:30.064 [2024-10-08 19:00:58.712063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:30.064 [2024-10-08 19:00:58.712078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:33:30.064 [2024-10-08 19:00:58.712100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:30.064 [2024-10-08 19:00:58.712177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:30.064 [2024-10-08 19:00:58.712196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:30.064 [2024-10-08 19:00:58.712210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:33:30.064 [2024-10-08 19:00:58.712226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:30.064 [2024-10-08 19:00:58.712257] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:30.064 [2024-10-08 19:00:58.717465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:30.064 [2024-10-08 19:00:58.717499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:30.064 [2024-10-08 19:00:58.717514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.214 ms 00:33:30.064 [2024-10-08 19:00:58.717541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:30.064 [2024-10-08 19:00:58.717593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:30.064 [2024-10-08 19:00:58.717605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:30.064 [2024-10-08 19:00:58.717619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:33:30.064 [2024-10-08 19:00:58.717632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:30.064 [2024-10-08 19:00:58.717680] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:33:30.064 [2024-10-08 19:00:58.717810] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:30.064 [2024-10-08 19:00:58.717831] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:30.064 [2024-10-08 19:00:58.717845] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:30.064 [2024-10-08 19:00:58.717865] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:30.064 [2024-10-08 19:00:58.717877] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:30.064 [2024-10-08 19:00:58.717892] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:30.064 [2024-10-08 19:00:58.717903] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:30.064 [2024-10-08 19:00:58.717926] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:30.064 [2024-10-08 19:00:58.717937] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:30.064 [2024-10-08 19:00:58.717950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:30.064 [2024-10-08 19:00:58.718017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:30.064 [2024-10-08 19:00:58.718034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:33:30.064 [2024-10-08 19:00:58.718045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:30.064 [2024-10-08 19:00:58.718165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:30.064 [2024-10-08 19:00:58.718183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:30.064 [2024-10-08 19:00:58.718197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:33:30.064 [2024-10-08 19:00:58.718209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:30.064 [2024-10-08 19:00:58.718310] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:30.064 [2024-10-08 19:00:58.718324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:30.064 [2024-10-08 19:00:58.718339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:30.064 [2024-10-08 19:00:58.718350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:30.064 [2024-10-08 19:00:58.718365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:30.064 [2024-10-08 19:00:58.718375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:30.064 [2024-10-08 19:00:58.718388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:30.064 [2024-10-08 19:00:58.718399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:30.064 [2024-10-08 19:00:58.718413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:30.064 [2024-10-08 19:00:58.718423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:30.064 [2024-10-08 19:00:58.718437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:30.064 [2024-10-08 19:00:58.718447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:30.064 [2024-10-08 19:00:58.718460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:30.064 [2024-10-08 19:00:58.718471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:30.064 [2024-10-08 19:00:58.718485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:30.064 [2024-10-08 19:00:58.718495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:30.064 [2024-10-08 19:00:58.718511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:30.064 [2024-10-08 19:00:58.718522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:30.064 [2024-10-08 19:00:58.718535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:30.064 [2024-10-08 19:00:58.718545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:30.064 [2024-10-08 19:00:58.718559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:30.064 [2024-10-08 19:00:58.718571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:30.064 [2024-10-08 19:00:58.718586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:30.064 
[2024-10-08 19:00:58.718596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:30.064 [2024-10-08 19:00:58.718610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:30.064 [2024-10-08 19:00:58.718620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:30.064 [2024-10-08 19:00:58.718633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:30.064 [2024-10-08 19:00:58.718644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:30.064 [2024-10-08 19:00:58.718657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:30.064 [2024-10-08 19:00:58.718667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:30.064 [2024-10-08 19:00:58.718681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:30.064 [2024-10-08 19:00:58.718692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:30.064 [2024-10-08 19:00:58.718708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:30.064 [2024-10-08 19:00:58.718719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:30.064 [2024-10-08 19:00:58.718731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:30.064 [2024-10-08 19:00:58.718742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:30.064 [2024-10-08 19:00:58.718755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:30.064 [2024-10-08 19:00:58.718765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:30.064 [2024-10-08 19:00:58.718778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:30.064 [2024-10-08 19:00:58.718789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:30.064 [2024-10-08 19:00:58.718801] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:30.064 [2024-10-08 19:00:58.718812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:30.064 [2024-10-08 19:00:58.718825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:30.064 [2024-10-08 19:00:58.718835] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:30.064 [2024-10-08 19:00:58.718848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:30.064 [2024-10-08 19:00:58.718863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:30.064 [2024-10-08 19:00:58.718877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:30.064 [2024-10-08 19:00:58.718889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:30.064 [2024-10-08 19:00:58.718906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:30.064 [2024-10-08 19:00:58.718917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:30.064 [2024-10-08 19:00:58.718930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:30.064 [2024-10-08 19:00:58.718941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:30.064 [2024-10-08 19:00:58.718954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:30.064 [2024-10-08 19:00:58.718971] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:30.064 [2024-10-08 
19:00:58.719000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:30.064 [2024-10-08 19:00:58.719014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:30.064 [2024-10-08 19:00:58.719029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:30.064 [2024-10-08 19:00:58.719041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:30.064 [2024-10-08 19:00:58.719055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:30.064 [2024-10-08 19:00:58.719066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:30.064 [2024-10-08 19:00:58.719080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:30.064 [2024-10-08 19:00:58.719092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:30.064 [2024-10-08 19:00:58.719107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:30.064 [2024-10-08 19:00:58.719118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:30.065 [2024-10-08 19:00:58.719135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:30.065 [2024-10-08 19:00:58.719147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:30.065 [2024-10-08 19:00:58.719161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:30.065 [2024-10-08 19:00:58.719173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:30.065 [2024-10-08 19:00:58.719188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:30.065 [2024-10-08 19:00:58.719199] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:30.065 [2024-10-08 19:00:58.719215] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:30.065 [2024-10-08 19:00:58.719228] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:30.065 [2024-10-08 19:00:58.719245] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:30.065 [2024-10-08 19:00:58.719256] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:30.065 [2024-10-08 19:00:58.719271] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:30.065 [2024-10-08 19:00:58.719283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:30.065 [2024-10-08 19:00:58.719297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:30.065 [2024-10-08 19:00:58.719309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.040 ms 00:33:30.065 [2024-10-08 19:00:58.719323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:30.065 [2024-10-08 19:00:58.719373] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:33:30.065 [2024-10-08 19:00:58.719393] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:33:35.388 [2024-10-08 19:01:03.169941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.388 [2024-10-08 19:01:03.170011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:33:35.388 [2024-10-08 19:01:03.170031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4450.540 ms 00:33:35.388 [2024-10-08 19:01:03.170045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.388 [2024-10-08 19:01:03.209783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.388 [2024-10-08 19:01:03.209845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:35.388 [2024-10-08 19:01:03.209863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.404 ms 00:33:35.388 [2024-10-08 19:01:03.209876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.388 [2024-10-08 19:01:03.210071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.388 [2024-10-08 19:01:03.210090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:35.388 [2024-10-08 19:01:03.210102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:33:35.388 [2024-10-08 19:01:03.210118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.388 [2024-10-08 19:01:03.267221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.388 [2024-10-08 19:01:03.267284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:35.388 [2024-10-08 19:01:03.267308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.048 ms 00:33:35.388 [2024-10-08 19:01:03.267325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.388 [2024-10-08 19:01:03.267381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.388 [2024-10-08 19:01:03.267399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:35.388 [2024-10-08 19:01:03.267414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:35.389 [2024-10-08 19:01:03.267452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.389 [2024-10-08 19:01:03.268016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.389 [2024-10-08 19:01:03.268042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:35.389 [2024-10-08 19:01:03.268056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.483 ms 00:33:35.389 [2024-10-08 19:01:03.268077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.389 
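The layout dumped above is internally consistent and worth a sanity check: 20971520 L2P entries at 4 bytes each is exactly the 80.00 MiB "Region l2p", while `--l2p_dram_limit 10` only bounds how much of that table may stay resident in DRAM at once. Those 20971520 entries of 4 KiB blocks map roughly 80 GiB of user LBA space out of the 103424 MiB base device, with the remainder apparently going to metadata regions and overprovisioning. Quick arithmetic:

    echo $(( 20971520 * 4 / 1024 / 1024 ))     # -> 80   (MiB, l2p table size)
    echo $(( 20971520 * 4096 / 1024**3 ))      # -> 80   (GiB of mapped LBAs)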
[2024-10-08 19:01:03.268207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.389 [2024-10-08 19:01:03.268224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:35.389 [2024-10-08 19:01:03.268238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:33:35.389 [2024-10-08 19:01:03.268258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.389 [2024-10-08 19:01:03.289552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.389 [2024-10-08 19:01:03.289810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:35.389 [2024-10-08 19:01:03.289835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.268 ms 00:33:35.389 [2024-10-08 19:01:03.289849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.389 [2024-10-08 19:01:03.302680] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:35.389 [2024-10-08 19:01:03.305948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.389 [2024-10-08 19:01:03.305992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:35.389 [2024-10-08 19:01:03.306013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.928 ms 00:33:35.389 [2024-10-08 19:01:03.306025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.389 [2024-10-08 19:01:03.440587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.389 [2024-10-08 19:01:03.440654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:33:35.389 [2024-10-08 19:01:03.440678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 134.510 ms 00:33:35.389 [2024-10-08 19:01:03.440689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.389 [2024-10-08 19:01:03.440884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.389 [2024-10-08 19:01:03.440898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:35.389 [2024-10-08 19:01:03.440915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:33:35.389 [2024-10-08 19:01:03.440926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.389 [2024-10-08 19:01:03.479770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.389 [2024-10-08 19:01:03.479824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:33:35.389 [2024-10-08 19:01:03.479842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.767 ms 00:33:35.389 [2024-10-08 19:01:03.479854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.389 [2024-10-08 19:01:03.517722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.389 [2024-10-08 19:01:03.517767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:33:35.389 [2024-10-08 19:01:03.517786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.818 ms 00:33:35.389 [2024-10-08 19:01:03.517796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.389 [2024-10-08 19:01:03.518568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.389 [2024-10-08 19:01:03.518595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:35.389 
[2024-10-08 19:01:03.518611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.728 ms 00:33:35.389 [2024-10-08 19:01:03.518622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.389 [2024-10-08 19:01:03.633715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.389 [2024-10-08 19:01:03.633973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:33:35.389 [2024-10-08 19:01:03.634033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 115.022 ms 00:33:35.389 [2024-10-08 19:01:03.634046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.389 [2024-10-08 19:01:03.676643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.389 [2024-10-08 19:01:03.676885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:33:35.389 [2024-10-08 19:01:03.676920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.463 ms 00:33:35.389 [2024-10-08 19:01:03.676932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.389 [2024-10-08 19:01:03.718400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.389 [2024-10-08 19:01:03.718593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:33:35.389 [2024-10-08 19:01:03.718625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.392 ms 00:33:35.389 [2024-10-08 19:01:03.718638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.389 [2024-10-08 19:01:03.760459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.389 [2024-10-08 19:01:03.760646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:35.389 [2024-10-08 19:01:03.760694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.767 ms 00:33:35.389 [2024-10-08 19:01:03.760707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.389 [2024-10-08 19:01:03.760783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.389 [2024-10-08 19:01:03.760799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:35.389 [2024-10-08 19:01:03.760822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:33:35.389 [2024-10-08 19:01:03.760834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.389 [2024-10-08 19:01:03.760973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.389 [2024-10-08 19:01:03.760989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:35.389 [2024-10-08 19:01:03.761005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:33:35.389 [2024-10-08 19:01:03.761017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.389 [2024-10-08 19:01:03.762166] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 5060.496 ms, result 0 00:33:35.389 { 00:33:35.389 "name": "ftl0", 00:33:35.389 "uuid": "88cec975-96d4-4e29-9174-d0217503c41a" 00:33:35.389 } 00:33:35.389 19:01:03 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:33:35.389 19:01:03 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:33:35.389 19:01:04 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:33:35.389 19:01:04 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:33:35.648 [2024-10-08 19:01:04.297543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.648 [2024-10-08 19:01:04.297606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:35.648 [2024-10-08 19:01:04.297625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:35.648 [2024-10-08 19:01:04.297639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.648 [2024-10-08 19:01:04.297669] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:35.649 [2024-10-08 19:01:04.302196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.649 [2024-10-08 19:01:04.302396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:35.649 [2024-10-08 19:01:04.302441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.501 ms 00:33:35.649 [2024-10-08 19:01:04.302453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.649 [2024-10-08 19:01:04.302767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.649 [2024-10-08 19:01:04.302791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:35.649 [2024-10-08 19:01:04.302807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:33:35.649 [2024-10-08 19:01:04.302819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.649 [2024-10-08 19:01:04.305690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.649 [2024-10-08 19:01:04.305817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:33:35.649 [2024-10-08 19:01:04.305842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.849 ms 00:33:35.649 [2024-10-08 19:01:04.305856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.649 [2024-10-08 19:01:04.310996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.649 [2024-10-08 19:01:04.311028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:33:35.649 [2024-10-08 19:01:04.311043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.108 ms 00:33:35.649 [2024-10-08 19:01:04.311053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.649 [2024-10-08 19:01:04.349240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.649 [2024-10-08 19:01:04.349282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:33:35.649 [2024-10-08 19:01:04.349299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.128 ms 00:33:35.649 [2024-10-08 19:01:04.349310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.649 [2024-10-08 19:01:04.372006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.649 [2024-10-08 19:01:04.372185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:33:35.649 [2024-10-08 19:01:04.372216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.641 ms 00:33:35.649 [2024-10-08 19:01:04.372228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.649 [2024-10-08 19:01:04.372398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.649 [2024-10-08 19:01:04.372413] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:33:35.649 [2024-10-08 19:01:04.372429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:33:35.649 [2024-10-08 19:01:04.372440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.909 [2024-10-08 19:01:04.410628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.909 [2024-10-08 19:01:04.410669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:33:35.909 [2024-10-08 19:01:04.410687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.161 ms 00:33:35.909 [2024-10-08 19:01:04.410697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.909 [2024-10-08 19:01:04.446835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.909 [2024-10-08 19:01:04.447028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:33:35.909 [2024-10-08 19:01:04.447057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.090 ms 00:33:35.909 [2024-10-08 19:01:04.447069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.909 [2024-10-08 19:01:04.483513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.909 [2024-10-08 19:01:04.483553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:33:35.909 [2024-10-08 19:01:04.483570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.390 ms 00:33:35.909 [2024-10-08 19:01:04.483580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.909 [2024-10-08 19:01:04.519747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.909 [2024-10-08 19:01:04.519785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:33:35.909 [2024-10-08 19:01:04.519802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.061 ms 00:33:35.909 [2024-10-08 19:01:04.519812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.909 [2024-10-08 19:01:04.519855] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:35.909 [2024-10-08 19:01:04.519872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:33:35.909 [2024-10-08 19:01:04.519888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:35.909 [2024-10-08 19:01:04.519900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:35.909 [2024-10-08 19:01:04.519914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:35.909 [2024-10-08 19:01:04.519925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:35.909 [2024-10-08 19:01:04.519939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:35.909 [2024-10-08 19:01:04.519950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:35.909 [2024-10-08 19:01:04.519983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:35.909 [2024-10-08 19:01:04.519994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:35.909 [2024-10-08 19:01:04.520008] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:35.909 [2024-10-08 19:01:04.520019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:35.909 [2024-10-08 19:01:04.520033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:35.909 [2024-10-08 19:01:04.520044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:35.909 [2024-10-08 19:01:04.520058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:35.909 [2024-10-08 19:01:04.520069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:35.909 [2024-10-08 19:01:04.520082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:35.909 [2024-10-08 19:01:04.520093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:35.909 [2024-10-08 19:01:04.520106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:35.909 [2024-10-08 19:01:04.520117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:35.909 [2024-10-08 19:01:04.520131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:35.909 [2024-10-08 19:01:04.520142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:35.909 [2024-10-08 19:01:04.520157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:35.909 [2024-10-08 19:01:04.520168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:35.909 [2024-10-08 19:01:04.520184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:35.909 [2024-10-08 19:01:04.520195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:35.909 [2024-10-08 19:01:04.520209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:35.909 [2024-10-08 19:01:04.520219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:35.909 [2024-10-08 19:01:04.520233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:35.909 [2024-10-08 19:01:04.520244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:35.909 [2024-10-08 19:01:04.520257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 
[2024-10-08 19:01:04.520317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:33:35.910 [2024-10-08 19:01:04.520633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.520989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.521005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.521016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.521030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.521041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.521054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.521065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.521078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.521093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.521106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.521117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.521130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.521141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.521156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:35.910 [2024-10-08 19:01:04.521175] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:35.910 [2024-10-08 19:01:04.521192] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 88cec975-96d4-4e29-9174-d0217503c41a 00:33:35.910 [2024-10-08 19:01:04.521203] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:33:35.910 [2024-10-08 19:01:04.521218] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:33:35.910 [2024-10-08 19:01:04.521228] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:33:35.910 [2024-10-08 19:01:04.521242] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:33:35.910 [2024-10-08 19:01:04.521252] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:35.910 [2024-10-08 19:01:04.521268] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:35.910 [2024-10-08 19:01:04.521279] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:35.910 [2024-10-08 19:01:04.521290] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:35.910 [2024-10-08 19:01:04.521299] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:33:35.910 [2024-10-08 19:01:04.521312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.910 [2024-10-08 19:01:04.521322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:35.910 [2024-10-08 19:01:04.521336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.459 ms 00:33:35.910 [2024-10-08 19:01:04.521346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.910 [2024-10-08 19:01:04.542310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.910 [2024-10-08 19:01:04.542455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:35.910 [2024-10-08 19:01:04.542481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.904 ms 00:33:35.910 [2024-10-08 19:01:04.542496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.910 [2024-10-08 19:01:04.543038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.910 [2024-10-08 19:01:04.543054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:35.910 [2024-10-08 19:01:04.543067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.508 ms 00:33:35.910 [2024-10-08 19:01:04.543077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.910 [2024-10-08 19:01:04.607534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:35.910 [2024-10-08 19:01:04.607596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:35.910 [2024-10-08 19:01:04.607618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:35.910 [2024-10-08 19:01:04.607633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.910 [2024-10-08 19:01:04.607712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:35.910 [2024-10-08 19:01:04.607725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:35.910 [2024-10-08 19:01:04.607740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:35.910 [2024-10-08 19:01:04.607752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.911 [2024-10-08 19:01:04.607882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:35.911 [2024-10-08 19:01:04.607898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:35.911 [2024-10-08 19:01:04.607913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:35.911 [2024-10-08 19:01:04.607925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.911 [2024-10-08 19:01:04.607977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:35.911 [2024-10-08 19:01:04.607991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:35.911 [2024-10-08 19:01:04.608006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:35.911 [2024-10-08 19:01:04.608017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:36.170 [2024-10-08 19:01:04.741470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:36.170 [2024-10-08 19:01:04.741536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:36.170 [2024-10-08 19:01:04.741556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:33:36.170 [2024-10-08 19:01:04.741572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:36.170 [2024-10-08 19:01:04.848933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:36.170 [2024-10-08 19:01:04.849025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:36.170 [2024-10-08 19:01:04.849044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:36.170 [2024-10-08 19:01:04.849057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:36.170 [2024-10-08 19:01:04.849186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:36.170 [2024-10-08 19:01:04.849201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:36.170 [2024-10-08 19:01:04.849216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:36.170 [2024-10-08 19:01:04.849227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:36.170 [2024-10-08 19:01:04.849317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:36.170 [2024-10-08 19:01:04.849331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:36.170 [2024-10-08 19:01:04.849346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:36.170 [2024-10-08 19:01:04.849359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:36.170 [2024-10-08 19:01:04.849501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:36.170 [2024-10-08 19:01:04.849516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:36.170 [2024-10-08 19:01:04.849530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:36.170 [2024-10-08 19:01:04.849542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:36.170 [2024-10-08 19:01:04.849585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:36.170 [2024-10-08 19:01:04.849602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:36.170 [2024-10-08 19:01:04.849616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:36.170 [2024-10-08 19:01:04.849627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:36.170 [2024-10-08 19:01:04.849670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:36.170 [2024-10-08 19:01:04.849683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:36.170 [2024-10-08 19:01:04.849696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:36.170 [2024-10-08 19:01:04.849708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:36.170 [2024-10-08 19:01:04.849762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:36.170 [2024-10-08 19:01:04.849778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:36.170 [2024-10-08 19:01:04.849792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:36.170 [2024-10-08 19:01:04.849804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:36.170 [2024-10-08 19:01:04.849944] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 552.364 ms, result 0 00:33:36.170 true 00:33:36.170 19:01:04 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 77397 
00:33:36.170 19:01:04 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 77397 ']' 00:33:36.170 19:01:04 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 77397 00:33:36.170 19:01:04 ftl.ftl_restore -- common/autotest_common.sh@955 -- # uname 00:33:36.170 19:01:04 ftl.ftl_restore -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:36.170 19:01:04 ftl.ftl_restore -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77397 00:33:36.170 19:01:04 ftl.ftl_restore -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:36.170 19:01:04 ftl.ftl_restore -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:36.170 19:01:04 ftl.ftl_restore -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77397' 00:33:36.170 killing process with pid 77397 00:33:36.170 19:01:04 ftl.ftl_restore -- common/autotest_common.sh@969 -- # kill 77397 00:33:36.170 19:01:04 ftl.ftl_restore -- common/autotest_common.sh@974 -- # wait 77397 00:33:42.737 19:01:10 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:33:46.921 262144+0 records in 00:33:46.921 262144+0 records out 00:33:46.921 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 5.29602 s, 203 MB/s 00:33:46.921 19:01:15 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:33:48.840 19:01:17 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:33:48.840 [2024-10-08 19:01:17.489507] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:33:48.840 [2024-10-08 19:01:17.489649] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77670 ] 00:33:49.098 [2024-10-08 19:01:17.666431] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:49.358 [2024-10-08 19:01:17.943550] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:33:49.617 [2024-10-08 19:01:18.333596] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:49.617 [2024-10-08 19:01:18.333882] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:49.876 [2024-10-08 19:01:18.506229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.876 [2024-10-08 19:01:18.506293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:49.876 [2024-10-08 19:01:18.506311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:33:49.876 [2024-10-08 19:01:18.506348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.876 [2024-10-08 19:01:18.506418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.876 [2024-10-08 19:01:18.506433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:49.876 [2024-10-08 19:01:18.506446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:33:49.876 [2024-10-08 19:01:18.506458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.876 [2024-10-08 19:01:18.506485] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:33:49.876 [2024-10-08 19:01:18.507595] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:49.876 [2024-10-08 19:01:18.507628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.876 [2024-10-08 19:01:18.507641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:49.876 [2024-10-08 19:01:18.507654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.147 ms 00:33:49.876 [2024-10-08 19:01:18.507665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.876 [2024-10-08 19:01:18.509301] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:33:49.876 [2024-10-08 19:01:18.529522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.876 [2024-10-08 19:01:18.529578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:49.876 [2024-10-08 19:01:18.529596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.221 ms 00:33:49.876 [2024-10-08 19:01:18.529608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.876 [2024-10-08 19:01:18.529687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.876 [2024-10-08 19:01:18.529703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:49.876 [2024-10-08 19:01:18.529716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:33:49.876 [2024-10-08 19:01:18.529729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.876 [2024-10-08 19:01:18.536899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.876 [2024-10-08 19:01:18.536936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:49.876 [2024-10-08 19:01:18.536951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.083 ms 00:33:49.876 [2024-10-08 19:01:18.536976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.876 [2024-10-08 19:01:18.537065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.876 [2024-10-08 19:01:18.537082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:49.876 [2024-10-08 19:01:18.537096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:33:49.876 [2024-10-08 19:01:18.537108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.876 [2024-10-08 19:01:18.537163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.876 [2024-10-08 19:01:18.537177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:49.876 [2024-10-08 19:01:18.537190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:33:49.876 [2024-10-08 19:01:18.537202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.876 [2024-10-08 19:01:18.537232] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:49.876 [2024-10-08 19:01:18.542424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.876 [2024-10-08 19:01:18.542591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:49.876 [2024-10-08 19:01:18.542785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.198 ms 00:33:49.876 [2024-10-08 19:01:18.542828] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.876 [2024-10-08 19:01:18.542893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.876 [2024-10-08 19:01:18.543010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:49.876 [2024-10-08 19:01:18.543055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:33:49.876 [2024-10-08 19:01:18.543095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.876 [2024-10-08 19:01:18.543243] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:49.876 [2024-10-08 19:01:18.543383] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:49.876 [2024-10-08 19:01:18.543544] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:49.876 [2024-10-08 19:01:18.543738] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:49.876 [2024-10-08 19:01:18.543879] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:49.876 [2024-10-08 19:01:18.543987] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:49.876 [2024-10-08 19:01:18.544007] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:49.876 [2024-10-08 19:01:18.544031] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:49.876 [2024-10-08 19:01:18.544045] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:49.876 [2024-10-08 19:01:18.544059] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:49.876 [2024-10-08 19:01:18.544071] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:49.876 [2024-10-08 19:01:18.544083] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:49.876 [2024-10-08 19:01:18.544094] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:49.876 [2024-10-08 19:01:18.544109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.876 [2024-10-08 19:01:18.544121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:49.876 [2024-10-08 19:01:18.544135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.869 ms 00:33:49.876 [2024-10-08 19:01:18.544147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.876 [2024-10-08 19:01:18.544237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.876 [2024-10-08 19:01:18.544255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:49.876 [2024-10-08 19:01:18.544268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:33:49.877 [2024-10-08 19:01:18.544280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.877 [2024-10-08 19:01:18.544385] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:49.877 [2024-10-08 19:01:18.544402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:49.877 [2024-10-08 19:01:18.544414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:33:49.877 [2024-10-08 19:01:18.544426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:49.877 [2024-10-08 19:01:18.544439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:49.877 [2024-10-08 19:01:18.544450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:49.877 [2024-10-08 19:01:18.544461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:49.877 [2024-10-08 19:01:18.544473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:49.877 [2024-10-08 19:01:18.544484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:49.877 [2024-10-08 19:01:18.544495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:49.877 [2024-10-08 19:01:18.544506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:49.877 [2024-10-08 19:01:18.544517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:49.877 [2024-10-08 19:01:18.544528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:49.877 [2024-10-08 19:01:18.544551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:49.877 [2024-10-08 19:01:18.544563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:49.877 [2024-10-08 19:01:18.544574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:49.877 [2024-10-08 19:01:18.544586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:49.877 [2024-10-08 19:01:18.544597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:49.877 [2024-10-08 19:01:18.544608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:49.877 [2024-10-08 19:01:18.544619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:49.877 [2024-10-08 19:01:18.544631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:49.877 [2024-10-08 19:01:18.544643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:49.877 [2024-10-08 19:01:18.544656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:49.877 [2024-10-08 19:01:18.544668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:49.877 [2024-10-08 19:01:18.544679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:49.877 [2024-10-08 19:01:18.544690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:49.877 [2024-10-08 19:01:18.544702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:49.877 [2024-10-08 19:01:18.544713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:49.877 [2024-10-08 19:01:18.544724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:49.877 [2024-10-08 19:01:18.544735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:49.877 [2024-10-08 19:01:18.544746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:49.877 [2024-10-08 19:01:18.544757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:49.877 [2024-10-08 19:01:18.544769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:49.877 [2024-10-08 19:01:18.544780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:49.877 [2024-10-08 19:01:18.544791] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:33:49.877 [2024-10-08 19:01:18.544802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:49.877 [2024-10-08 19:01:18.544813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:49.877 [2024-10-08 19:01:18.544824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:49.877 [2024-10-08 19:01:18.544836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:49.877 [2024-10-08 19:01:18.544846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:49.877 [2024-10-08 19:01:18.544857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:49.877 [2024-10-08 19:01:18.544869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:49.877 [2024-10-08 19:01:18.544880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:49.877 [2024-10-08 19:01:18.544891] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:49.877 [2024-10-08 19:01:18.544903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:49.877 [2024-10-08 19:01:18.544919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:49.877 [2024-10-08 19:01:18.544931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:49.877 [2024-10-08 19:01:18.544942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:49.877 [2024-10-08 19:01:18.544966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:49.877 [2024-10-08 19:01:18.544979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:49.877 [2024-10-08 19:01:18.544991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:49.877 [2024-10-08 19:01:18.545002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:49.877 [2024-10-08 19:01:18.545013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:49.877 [2024-10-08 19:01:18.545026] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:49.877 [2024-10-08 19:01:18.545041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:49.877 [2024-10-08 19:01:18.545056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:49.877 [2024-10-08 19:01:18.545069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:49.877 [2024-10-08 19:01:18.545082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:49.877 [2024-10-08 19:01:18.545095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:49.877 [2024-10-08 19:01:18.545107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:49.877 [2024-10-08 19:01:18.545119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:49.877 [2024-10-08 19:01:18.545131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:49.877 [2024-10-08 19:01:18.545143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:49.877 [2024-10-08 19:01:18.545156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:49.877 [2024-10-08 19:01:18.545168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:49.877 [2024-10-08 19:01:18.545180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:49.877 [2024-10-08 19:01:18.545192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:49.877 [2024-10-08 19:01:18.545204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:49.877 [2024-10-08 19:01:18.545217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:49.877 [2024-10-08 19:01:18.545229] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:49.877 [2024-10-08 19:01:18.545242] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:49.877 [2024-10-08 19:01:18.545255] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:49.877 [2024-10-08 19:01:18.545267] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:49.877 [2024-10-08 19:01:18.545280] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:49.877 [2024-10-08 19:01:18.545292] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:49.877 [2024-10-08 19:01:18.545305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.877 [2024-10-08 19:01:18.545317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:49.877 [2024-10-08 19:01:18.545328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.976 ms 00:33:49.877 [2024-10-08 19:01:18.545340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.877 [2024-10-08 19:01:18.598564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.877 [2024-10-08 19:01:18.598622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:49.877 [2024-10-08 19:01:18.598656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.163 ms 00:33:49.877 [2024-10-08 19:01:18.598669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:49.877 [2024-10-08 19:01:18.598783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:49.877 [2024-10-08 19:01:18.598797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:49.877 [2024-10-08 19:01:18.598811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.055 ms 00:33:49.877 [2024-10-08 19:01:18.598823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.137 [2024-10-08 19:01:18.649237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.137 [2024-10-08 19:01:18.649299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:50.137 [2024-10-08 19:01:18.649325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.310 ms 00:33:50.137 [2024-10-08 19:01:18.649338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.137 [2024-10-08 19:01:18.649404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.137 [2024-10-08 19:01:18.649417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:50.137 [2024-10-08 19:01:18.649430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:50.137 [2024-10-08 19:01:18.649442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.137 [2024-10-08 19:01:18.650190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.137 [2024-10-08 19:01:18.650253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:50.137 [2024-10-08 19:01:18.650295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.461 ms 00:33:50.137 [2024-10-08 19:01:18.650356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.137 [2024-10-08 19:01:18.650537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.137 [2024-10-08 19:01:18.650614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:50.137 [2024-10-08 19:01:18.650655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:33:50.137 [2024-10-08 19:01:18.650694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.137 [2024-10-08 19:01:18.673788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.137 [2024-10-08 19:01:18.673992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:50.137 [2024-10-08 19:01:18.674110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.979 ms 00:33:50.137 [2024-10-08 19:01:18.674159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.137 [2024-10-08 19:01:18.694914] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:33:50.137 [2024-10-08 19:01:18.695144] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:50.137 [2024-10-08 19:01:18.695199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.137 [2024-10-08 19:01:18.695214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:50.137 [2024-10-08 19:01:18.695230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.810 ms 00:33:50.137 [2024-10-08 19:01:18.695244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.137 [2024-10-08 19:01:18.728085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.137 [2024-10-08 19:01:18.728142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:50.137 [2024-10-08 19:01:18.728161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.790 ms 00:33:50.137 [2024-10-08 19:01:18.728175] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.137 [2024-10-08 19:01:18.749077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.137 [2024-10-08 19:01:18.749124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:50.137 [2024-10-08 19:01:18.749141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.841 ms 00:33:50.137 [2024-10-08 19:01:18.749153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.137 [2024-10-08 19:01:18.768922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.137 [2024-10-08 19:01:18.769121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:50.137 [2024-10-08 19:01:18.769150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.702 ms 00:33:50.137 [2024-10-08 19:01:18.769164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.137 [2024-10-08 19:01:18.770089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.137 [2024-10-08 19:01:18.770128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:50.137 [2024-10-08 19:01:18.770144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.769 ms 00:33:50.137 [2024-10-08 19:01:18.770157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.137 [2024-10-08 19:01:18.869006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.137 [2024-10-08 19:01:18.869273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:33:50.137 [2024-10-08 19:01:18.869303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.818 ms 00:33:50.137 [2024-10-08 19:01:18.869318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.137 [2024-10-08 19:01:18.881687] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:50.137 [2024-10-08 19:01:18.885091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.137 [2024-10-08 19:01:18.885128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:50.137 [2024-10-08 19:01:18.885157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.672 ms 00:33:50.137 [2024-10-08 19:01:18.885170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.137 [2024-10-08 19:01:18.885315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.137 [2024-10-08 19:01:18.885331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:50.137 [2024-10-08 19:01:18.885345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:33:50.137 [2024-10-08 19:01:18.885357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.137 [2024-10-08 19:01:18.885467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.137 [2024-10-08 19:01:18.885488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:50.137 [2024-10-08 19:01:18.885503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:33:50.137 [2024-10-08 19:01:18.885515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.137 [2024-10-08 19:01:18.885547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.137 [2024-10-08 19:01:18.885566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:33:50.137 [2024-10-08 19:01:18.885580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:50.137 [2024-10-08 19:01:18.885592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.137 [2024-10-08 19:01:18.885632] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:50.137 [2024-10-08 19:01:18.885646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.137 [2024-10-08 19:01:18.885658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:50.137 [2024-10-08 19:01:18.885671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:33:50.137 [2024-10-08 19:01:18.885684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:50.396 [2024-10-08 19:01:18.927723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.396 [2024-10-08 19:01:18.927782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:50.396 [2024-10-08 19:01:18.927803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.008 ms 00:33:50.396 [2024-10-08 19:01:18.927818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.396 [2024-10-08 19:01:18.927921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.396 [2024-10-08 19:01:18.927939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:50.396 [2024-10-08 19:01:18.927954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:33:50.396 [2024-10-08 19:01:18.927980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:50.396 [2024-10-08 19:01:18.929324] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 422.490 ms, result 0
00:33:51.332  [2024-10-08T19:01:21.025Z] Copying: 30/1024 [MB] (30 MBps) [...] [2024-10-08T19:01:51.567Z] Copying: 1024/1024 [MB] (average 31 MBps)[2024-10-08 19:01:51.468134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:22.811 [2024-10-08 19:01:51.468190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:22.811 [2024-10-08 19:01:51.468210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:22.811 [2024-10-08 19:01:51.468222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:22.811 [2024-10-08 19:01:51.468258] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:34:22.811 [2024-10-08 19:01:51.472604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:22.811 [2024-10-08 19:01:51.472640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:22.811 [2024-10-08 19:01:51.472653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.326 ms 00:34:22.811 [2024-10-08 19:01:51.472663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:22.811 [2024-10-08 19:01:51.474311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:22.811 [2024-10-08 19:01:51.474354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:22.811 [2024-10-08 19:01:51.474368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.624 ms 00:34:22.811 [2024-10-08 19:01:51.474379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:22.811 [2024-10-08 19:01:51.489238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:22.811 [2024-10-08 19:01:51.489295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:34:22.811 [2024-10-08 19:01:51.489310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.839 ms 00:34:22.811 [2024-10-08 19:01:51.489321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:22.811 [2024-10-08 19:01:51.494503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:22.811 [2024-10-08 19:01:51.494536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:34:22.811 [2024-10-08 19:01:51.494549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.146 ms 00:34:22.811 [2024-10-08 19:01:51.494559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:22.811 [2024-10-08 19:01:51.533307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:22.811 [2024-10-08 19:01:51.533363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:34:22.811 [2024-10-08 19:01:51.533379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.673 ms 00:34:22.811 [2024-10-08 19:01:51.533405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:22.811 [2024-10-08 19:01:51.554134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:22.811 [2024-10-08 19:01:51.554185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:34:22.811 [2024-10-08 19:01:51.554217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
20.683 ms 00:34:22.811 [2024-10-08 19:01:51.554227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:22.811 [2024-10-08 19:01:51.554361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:22.811 [2024-10-08 19:01:51.554375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:34:22.811 [2024-10-08 19:01:51.554386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:34:22.811 [2024-10-08 19:01:51.554397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.070 [2024-10-08 19:01:51.591149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.070 [2024-10-08 19:01:51.591187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:34:23.070 [2024-10-08 19:01:51.591201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.734 ms 00:34:23.070 [2024-10-08 19:01:51.591211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.070 [2024-10-08 19:01:51.629071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.070 [2024-10-08 19:01:51.629236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:34:23.070 [2024-10-08 19:01:51.629257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.820 ms 00:34:23.070 [2024-10-08 19:01:51.629268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.070 [2024-10-08 19:01:51.665595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.070 [2024-10-08 19:01:51.665650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:34:23.070 [2024-10-08 19:01:51.665664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.246 ms 00:34:23.070 [2024-10-08 19:01:51.665691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.070 [2024-10-08 19:01:51.702364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.070 [2024-10-08 19:01:51.702401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:34:23.070 [2024-10-08 19:01:51.702415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.591 ms 00:34:23.070 [2024-10-08 19:01:51.702424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.070 [2024-10-08 19:01:51.702462] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:23.070 [2024-10-08 19:01:51.702479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:34:23.070 [2024-10-08 19:01:51.702492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:23.070 [2024-10-08 19:01:51.702503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:23.070 [2024-10-08 19:01:51.702514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:23.070 [2024-10-08 19:01:51.702525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:23.070 [2024-10-08 19:01:51.702536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:23.070 [2024-10-08 19:01:51.702547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:23.070 [2024-10-08 19:01:51.702558] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:23.070 [2024-10-08 19:01:51.702568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:23.070 [2024-10-08 19:01:51.702579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:23.070 [2024-10-08 19:01:51.702590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:23.070 [2024-10-08 19:01:51.702600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:23.070 [2024-10-08 19:01:51.702610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:23.070 [2024-10-08 19:01:51.702621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.702631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.702642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.702652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.702662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.702673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.702684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.702694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.702704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.702714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.702725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.702736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.702746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.702758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.702768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.702778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.702789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.702800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.702811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 
19:01:51.702822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.702832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.702844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.702854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.702865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.702875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.702885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.702907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.702918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.702928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.702939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.702950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.702976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.702987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.702998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.703009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.703019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.703030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.703041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.703051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.703062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.703088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.703099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.703111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.703121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 
00:34:23.071 [2024-10-08 19:01:51.703132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.703143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.703159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.703170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.703181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.703192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.703203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.703213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.703224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.703235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.703245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.703256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.703267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.703277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.703288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.703298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.703309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.703319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.703330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.703341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.703351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.703362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.703372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.703383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.703393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 
wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.703403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.703414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.703424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.703434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.703453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.703464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.703475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.703485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.703495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.703506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.703516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.703527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.703541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.703551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.703563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.703573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.703584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.703595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:23.071 [2024-10-08 19:01:51.703613] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:23.071 [2024-10-08 19:01:51.703624] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 88cec975-96d4-4e29-9174-d0217503c41a 00:34:23.071 [2024-10-08 19:01:51.703635] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:34:23.071 [2024-10-08 19:01:51.703645] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:34:23.071 [2024-10-08 19:01:51.703655] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:34:23.071 [2024-10-08 19:01:51.703666] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:34:23.071 [2024-10-08 19:01:51.703675] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:23.071 [2024-10-08 19:01:51.703686] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:23.071 
[2024-10-08 19:01:51.703695] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:23.071 [2024-10-08 19:01:51.703704] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:23.071 [2024-10-08 19:01:51.703713] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:23.072 [2024-10-08 19:01:51.703723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.072 [2024-10-08 19:01:51.703741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:23.072 [2024-10-08 19:01:51.703766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.263 ms 00:34:23.072 [2024-10-08 19:01:51.703775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.072 [2024-10-08 19:01:51.723778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.072 [2024-10-08 19:01:51.723812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:23.072 [2024-10-08 19:01:51.723826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.964 ms 00:34:23.072 [2024-10-08 19:01:51.723837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.072 [2024-10-08 19:01:51.724498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:23.072 [2024-10-08 19:01:51.724520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:23.072 [2024-10-08 19:01:51.724532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.629 ms 00:34:23.072 [2024-10-08 19:01:51.724543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.072 [2024-10-08 19:01:51.771434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:23.072 [2024-10-08 19:01:51.771485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:23.072 [2024-10-08 19:01:51.771498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:23.072 [2024-10-08 19:01:51.771509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.072 [2024-10-08 19:01:51.771573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:23.072 [2024-10-08 19:01:51.771583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:23.072 [2024-10-08 19:01:51.771594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:23.072 [2024-10-08 19:01:51.771604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.072 [2024-10-08 19:01:51.771672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:23.072 [2024-10-08 19:01:51.771685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:23.072 [2024-10-08 19:01:51.771695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:23.072 [2024-10-08 19:01:51.771705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.072 [2024-10-08 19:01:51.771726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:23.072 [2024-10-08 19:01:51.771738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:23.072 [2024-10-08 19:01:51.771748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:23.072 [2024-10-08 19:01:51.771757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.329 [2024-10-08 19:01:51.899622] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:23.329 [2024-10-08 19:01:51.899817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:23.329 [2024-10-08 19:01:51.899839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:23.329 [2024-10-08 19:01:51.899850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.329 [2024-10-08 19:01:52.004336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:23.329 [2024-10-08 19:01:52.004397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:23.329 [2024-10-08 19:01:52.004413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:23.329 [2024-10-08 19:01:52.004423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.329 [2024-10-08 19:01:52.004528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:23.329 [2024-10-08 19:01:52.004542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:23.329 [2024-10-08 19:01:52.004553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:23.329 [2024-10-08 19:01:52.004563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.329 [2024-10-08 19:01:52.004610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:23.329 [2024-10-08 19:01:52.004622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:23.329 [2024-10-08 19:01:52.004637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:23.329 [2024-10-08 19:01:52.004648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.329 [2024-10-08 19:01:52.004750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:23.329 [2024-10-08 19:01:52.004765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:23.329 [2024-10-08 19:01:52.004775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:23.329 [2024-10-08 19:01:52.004785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.329 [2024-10-08 19:01:52.004820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:23.329 [2024-10-08 19:01:52.004833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:23.329 [2024-10-08 19:01:52.004847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:23.329 [2024-10-08 19:01:52.004858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.329 [2024-10-08 19:01:52.004894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:23.329 [2024-10-08 19:01:52.004905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:23.329 [2024-10-08 19:01:52.004915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:23.329 [2024-10-08 19:01:52.004925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:23.329 [2024-10-08 19:01:52.004992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:23.329 [2024-10-08 19:01:52.005006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:23.329 [2024-10-08 19:01:52.005020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:23.329 [2024-10-08 19:01:52.005030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:34:23.329 [2024-10-08 19:01:52.005149] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 536.984 ms, result 0 00:34:24.704 00:34:24.704 00:34:24.704 19:01:53 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:34:24.704 [2024-10-08 19:01:53.370598] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:34:24.704 [2024-10-08 19:01:53.370780] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78028 ] 00:34:24.963 [2024-10-08 19:01:53.554766] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:25.222 [2024-10-08 19:01:53.770314] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:34:25.480 [2024-10-08 19:01:54.130211] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:25.480 [2024-10-08 19:01:54.130279] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:25.739 [2024-10-08 19:01:54.291466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:25.739 [2024-10-08 19:01:54.291525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:34:25.739 [2024-10-08 19:01:54.291541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:34:25.739 [2024-10-08 19:01:54.291555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.739 [2024-10-08 19:01:54.291607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:25.739 [2024-10-08 19:01:54.291620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:25.739 [2024-10-08 19:01:54.291631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:34:25.739 [2024-10-08 19:01:54.291641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.739 [2024-10-08 19:01:54.291663] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:34:25.739 [2024-10-08 19:01:54.292683] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:34:25.739 [2024-10-08 19:01:54.292712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:25.739 [2024-10-08 19:01:54.292724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:25.739 [2024-10-08 19:01:54.292736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.053 ms 00:34:25.739 [2024-10-08 19:01:54.292748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.739 [2024-10-08 19:01:54.294221] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:34:25.739 [2024-10-08 19:01:54.313744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:25.739 [2024-10-08 19:01:54.313791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:34:25.739 [2024-10-08 19:01:54.313807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.523 ms 00:34:25.739 [2024-10-08 19:01:54.313834] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:34:25.739 [2024-10-08 19:01:54.313905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:25.739 [2024-10-08 19:01:54.313931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:34:25.739 [2024-10-08 19:01:54.313943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:34:25.739 [2024-10-08 19:01:54.313953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.739 [2024-10-08 19:01:54.320987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:25.739 [2024-10-08 19:01:54.321023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:25.739 [2024-10-08 19:01:54.321035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.916 ms 00:34:25.739 [2024-10-08 19:01:54.321058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.739 [2024-10-08 19:01:54.321140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:25.739 [2024-10-08 19:01:54.321154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:25.739 [2024-10-08 19:01:54.321165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:34:25.739 [2024-10-08 19:01:54.321176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.739 [2024-10-08 19:01:54.321226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:25.739 [2024-10-08 19:01:54.321238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:34:25.739 [2024-10-08 19:01:54.321249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:34:25.739 [2024-10-08 19:01:54.321259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.739 [2024-10-08 19:01:54.321285] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:34:25.739 [2024-10-08 19:01:54.326473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:25.739 [2024-10-08 19:01:54.326507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:25.739 [2024-10-08 19:01:54.326519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.195 ms 00:34:25.739 [2024-10-08 19:01:54.326530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.739 [2024-10-08 19:01:54.326562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:25.739 [2024-10-08 19:01:54.326573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:34:25.739 [2024-10-08 19:01:54.326584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:34:25.739 [2024-10-08 19:01:54.326598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.739 [2024-10-08 19:01:54.326655] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:34:25.739 [2024-10-08 19:01:54.326678] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:34:25.739 [2024-10-08 19:01:54.326714] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:34:25.739 [2024-10-08 19:01:54.326732] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:34:25.739 [2024-10-08 19:01:54.326824] upgrade/ftl_sb_v5.c: 
92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:34:25.739 [2024-10-08 19:01:54.326837] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:34:25.739 [2024-10-08 19:01:54.326854] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:34:25.739 [2024-10-08 19:01:54.326866] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:34:25.739 [2024-10-08 19:01:54.326878] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:34:25.739 [2024-10-08 19:01:54.326890] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:34:25.739 [2024-10-08 19:01:54.326900] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:34:25.739 [2024-10-08 19:01:54.326910] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:34:25.739 [2024-10-08 19:01:54.326920] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:34:25.739 [2024-10-08 19:01:54.326931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:25.739 [2024-10-08 19:01:54.326941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:34:25.739 [2024-10-08 19:01:54.326951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:34:25.739 [2024-10-08 19:01:54.326981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.739 [2024-10-08 19:01:54.327062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:25.739 [2024-10-08 19:01:54.327073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:34:25.739 [2024-10-08 19:01:54.327083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:34:25.739 [2024-10-08 19:01:54.327093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.739 [2024-10-08 19:01:54.327214] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:34:25.739 [2024-10-08 19:01:54.327230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:34:25.739 [2024-10-08 19:01:54.327242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:25.739 [2024-10-08 19:01:54.327253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:25.739 [2024-10-08 19:01:54.327264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:34:25.739 [2024-10-08 19:01:54.327274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:34:25.739 [2024-10-08 19:01:54.327284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:34:25.739 [2024-10-08 19:01:54.327295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:34:25.739 [2024-10-08 19:01:54.327305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:34:25.739 [2024-10-08 19:01:54.327315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:25.739 [2024-10-08 19:01:54.327326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:34:25.739 [2024-10-08 19:01:54.327336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:34:25.739 [2024-10-08 19:01:54.327346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:25.739 [2024-10-08 
19:01:54.327366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:34:25.739 [2024-10-08 19:01:54.327377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:34:25.739 [2024-10-08 19:01:54.327386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:25.739 [2024-10-08 19:01:54.327396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:34:25.739 [2024-10-08 19:01:54.327406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:34:25.739 [2024-10-08 19:01:54.327416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:25.739 [2024-10-08 19:01:54.327427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:34:25.739 [2024-10-08 19:01:54.327437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:34:25.739 [2024-10-08 19:01:54.327456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:25.739 [2024-10-08 19:01:54.327466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:34:25.739 [2024-10-08 19:01:54.327476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:34:25.739 [2024-10-08 19:01:54.327486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:25.740 [2024-10-08 19:01:54.327496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:34:25.740 [2024-10-08 19:01:54.327506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:34:25.740 [2024-10-08 19:01:54.327516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:25.740 [2024-10-08 19:01:54.327542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:34:25.740 [2024-10-08 19:01:54.327553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:34:25.740 [2024-10-08 19:01:54.327563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:25.740 [2024-10-08 19:01:54.327574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:34:25.740 [2024-10-08 19:01:54.327584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:34:25.740 [2024-10-08 19:01:54.327595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:25.740 [2024-10-08 19:01:54.327606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:34:25.740 [2024-10-08 19:01:54.327616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:34:25.740 [2024-10-08 19:01:54.327627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:25.740 [2024-10-08 19:01:54.327637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:34:25.740 [2024-10-08 19:01:54.327648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:34:25.740 [2024-10-08 19:01:54.327659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:25.740 [2024-10-08 19:01:54.327669] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:34:25.740 [2024-10-08 19:01:54.327680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:34:25.740 [2024-10-08 19:01:54.327692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:25.740 [2024-10-08 19:01:54.327702] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:34:25.740 [2024-10-08 19:01:54.327718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region sb_mirror 00:34:25.740 [2024-10-08 19:01:54.327729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:25.740 [2024-10-08 19:01:54.327741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:25.740 [2024-10-08 19:01:54.327752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:34:25.740 [2024-10-08 19:01:54.327763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:34:25.740 [2024-10-08 19:01:54.327774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:34:25.740 [2024-10-08 19:01:54.327785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:34:25.740 [2024-10-08 19:01:54.327796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:34:25.740 [2024-10-08 19:01:54.327807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:34:25.740 [2024-10-08 19:01:54.327819] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:34:25.740 [2024-10-08 19:01:54.327834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:25.740 [2024-10-08 19:01:54.327846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:34:25.740 [2024-10-08 19:01:54.327858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:34:25.740 [2024-10-08 19:01:54.327870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:34:25.740 [2024-10-08 19:01:54.327882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:34:25.740 [2024-10-08 19:01:54.327894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:34:25.740 [2024-10-08 19:01:54.327906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:34:25.740 [2024-10-08 19:01:54.327918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:34:25.740 [2024-10-08 19:01:54.327930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:34:25.740 [2024-10-08 19:01:54.327942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:34:25.740 [2024-10-08 19:01:54.327954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:34:25.740 [2024-10-08 19:01:54.327966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:34:25.740 [2024-10-08 19:01:54.328003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:34:25.740 [2024-10-08 19:01:54.328015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:34:25.740 [2024-10-08 19:01:54.328028] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:34:25.740 [2024-10-08 19:01:54.328040] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:34:25.740 [2024-10-08 19:01:54.328053] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:25.740 [2024-10-08 19:01:54.328065] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:34:25.740 [2024-10-08 19:01:54.328077] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:34:25.740 [2024-10-08 19:01:54.328089] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:34:25.740 [2024-10-08 19:01:54.328102] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:34:25.740 [2024-10-08 19:01:54.328114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:25.740 [2024-10-08 19:01:54.328126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:34:25.740 [2024-10-08 19:01:54.328138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.962 ms 00:34:25.740 [2024-10-08 19:01:54.328149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.740 [2024-10-08 19:01:54.381595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:25.740 [2024-10-08 19:01:54.381650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:25.740 [2024-10-08 19:01:54.381666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.387 ms 00:34:25.740 [2024-10-08 19:01:54.381681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.740 [2024-10-08 19:01:54.381781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:25.740 [2024-10-08 19:01:54.381792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:34:25.740 [2024-10-08 19:01:54.381803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:34:25.740 [2024-10-08 19:01:54.381813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.740 [2024-10-08 19:01:54.428746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:25.740 [2024-10-08 19:01:54.428971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:25.740 [2024-10-08 19:01:54.428995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.856 ms 00:34:25.740 [2024-10-08 19:01:54.429022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.740 [2024-10-08 19:01:54.429079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:25.740 [2024-10-08 19:01:54.429092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:25.740 [2024-10-08 19:01:54.429115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:25.740 [2024-10-08 19:01:54.429126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.740 [2024-10-08 19:01:54.429655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:25.740 [2024-10-08 
19:01:54.429670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:25.740 [2024-10-08 19:01:54.429687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms 00:34:25.740 [2024-10-08 19:01:54.429697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.740 [2024-10-08 19:01:54.429815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:25.740 [2024-10-08 19:01:54.429828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:25.740 [2024-10-08 19:01:54.429839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:34:25.740 [2024-10-08 19:01:54.429849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.740 [2024-10-08 19:01:54.448248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:25.740 [2024-10-08 19:01:54.448289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:25.740 [2024-10-08 19:01:54.448304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.378 ms 00:34:25.740 [2024-10-08 19:01:54.448315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.740 [2024-10-08 19:01:54.467853] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:34:25.740 [2024-10-08 19:01:54.468026] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:34:25.740 [2024-10-08 19:01:54.468046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:25.740 [2024-10-08 19:01:54.468058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:34:25.740 [2024-10-08 19:01:54.468071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.604 ms 00:34:25.740 [2024-10-08 19:01:54.468081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.999 [2024-10-08 19:01:54.498650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:25.999 [2024-10-08 19:01:54.498695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:34:25.999 [2024-10-08 19:01:54.498710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.528 ms 00:34:25.999 [2024-10-08 19:01:54.498737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.999 [2024-10-08 19:01:54.517842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:25.999 [2024-10-08 19:01:54.517880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:34:25.999 [2024-10-08 19:01:54.517893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.051 ms 00:34:25.999 [2024-10-08 19:01:54.517903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.999 [2024-10-08 19:01:54.536628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:25.999 [2024-10-08 19:01:54.536665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:34:25.999 [2024-10-08 19:01:54.536678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.674 ms 00:34:25.999 [2024-10-08 19:01:54.536705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.999 [2024-10-08 19:01:54.537571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:25.999 [2024-10-08 19:01:54.537602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize P2L checkpointing 00:34:25.999 [2024-10-08 19:01:54.537614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.751 ms 00:34:25.999 [2024-10-08 19:01:54.537624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:25.999 [2024-10-08 19:01:54.627801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:25.999 [2024-10-08 19:01:54.627873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:34:25.999 [2024-10-08 19:01:54.627891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 90.153 ms 00:34:25.999 [2024-10-08 19:01:54.627903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.000 [2024-10-08 19:01:54.639101] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:34:26.000 [2024-10-08 19:01:54.642189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.000 [2024-10-08 19:01:54.642220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:34:26.000 [2024-10-08 19:01:54.642256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.210 ms 00:34:26.000 [2024-10-08 19:01:54.642266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.000 [2024-10-08 19:01:54.642378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.000 [2024-10-08 19:01:54.642400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:34:26.000 [2024-10-08 19:01:54.642411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:34:26.000 [2024-10-08 19:01:54.642421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.000 [2024-10-08 19:01:54.642514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.000 [2024-10-08 19:01:54.642526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:34:26.000 [2024-10-08 19:01:54.642537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:34:26.000 [2024-10-08 19:01:54.642550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.000 [2024-10-08 19:01:54.642573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.000 [2024-10-08 19:01:54.642584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:34:26.000 [2024-10-08 19:01:54.642595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:26.000 [2024-10-08 19:01:54.642604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.000 [2024-10-08 19:01:54.642635] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:34:26.000 [2024-10-08 19:01:54.642647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.000 [2024-10-08 19:01:54.642657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:34:26.000 [2024-10-08 19:01:54.642670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:34:26.000 [2024-10-08 19:01:54.642680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.000 [2024-10-08 19:01:54.679895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.000 [2024-10-08 19:01:54.679952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:34:26.000 [2024-10-08 19:01:54.679980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 37.189 ms 00:34:26.000 [2024-10-08 19:01:54.679991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.000 [2024-10-08 19:01:54.680070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:26.000 [2024-10-08 19:01:54.680083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:34:26.000 [2024-10-08 19:01:54.680095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:34:26.000 [2024-10-08 19:01:54.680108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:26.000 [2024-10-08 19:01:54.681269] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 389.321 ms, result 0 00:34:27.399  [2024-10-08T19:01:57.090Z] Copying: 32/1024 [MB] (32 MBps) [2024-10-08T19:01:58.025Z] Copying: 64/1024 [MB] (32 MBps) [2024-10-08T19:01:58.964Z] Copying: 97/1024 [MB] (33 MBps) [2024-10-08T19:02:00.340Z] Copying: 128/1024 [MB] (30 MBps) [2024-10-08T19:02:00.908Z] Copying: 158/1024 [MB] (30 MBps) [2024-10-08T19:02:02.328Z] Copying: 187/1024 [MB] (28 MBps) [2024-10-08T19:02:03.262Z] Copying: 217/1024 [MB] (30 MBps) [2024-10-08T19:02:04.197Z] Copying: 248/1024 [MB] (31 MBps) [2024-10-08T19:02:05.132Z] Copying: 281/1024 [MB] (32 MBps) [2024-10-08T19:02:06.069Z] Copying: 312/1024 [MB] (31 MBps) [2024-10-08T19:02:07.002Z] Copying: 345/1024 [MB] (32 MBps) [2024-10-08T19:02:08.002Z] Copying: 378/1024 [MB] (32 MBps) [2024-10-08T19:02:08.949Z] Copying: 409/1024 [MB] (31 MBps) [2024-10-08T19:02:10.325Z] Copying: 443/1024 [MB] (33 MBps) [2024-10-08T19:02:11.262Z] Copying: 474/1024 [MB] (30 MBps) [2024-10-08T19:02:12.200Z] Copying: 508/1024 [MB] (34 MBps) [2024-10-08T19:02:13.136Z] Copying: 539/1024 [MB] (31 MBps) [2024-10-08T19:02:14.071Z] Copying: 571/1024 [MB] (31 MBps) [2024-10-08T19:02:15.052Z] Copying: 603/1024 [MB] (31 MBps) [2024-10-08T19:02:15.987Z] Copying: 632/1024 [MB] (29 MBps) [2024-10-08T19:02:16.924Z] Copying: 663/1024 [MB] (30 MBps) [2024-10-08T19:02:18.301Z] Copying: 693/1024 [MB] (29 MBps) [2024-10-08T19:02:19.234Z] Copying: 724/1024 [MB] (31 MBps) [2024-10-08T19:02:20.170Z] Copying: 751/1024 [MB] (27 MBps) [2024-10-08T19:02:21.104Z] Copying: 782/1024 [MB] (31 MBps) [2024-10-08T19:02:22.071Z] Copying: 814/1024 [MB] (31 MBps) [2024-10-08T19:02:23.005Z] Copying: 845/1024 [MB] (31 MBps) [2024-10-08T19:02:23.938Z] Copying: 878/1024 [MB] (32 MBps) [2024-10-08T19:02:25.313Z] Copying: 910/1024 [MB] (32 MBps) [2024-10-08T19:02:26.247Z] Copying: 942/1024 [MB] (31 MBps) [2024-10-08T19:02:27.182Z] Copying: 974/1024 [MB] (31 MBps) [2024-10-08T19:02:27.749Z] Copying: 1005/1024 [MB] (31 MBps) [2024-10-08T19:02:28.681Z] Copying: 1024/1024 [MB] (average 31 MBps)[2024-10-08 19:02:28.655405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:59.924 [2024-10-08 19:02:28.655547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:59.924 [2024-10-08 19:02:28.655593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:34:59.924 [2024-10-08 19:02:28.655641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:59.924 [2024-10-08 19:02:28.655713] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:59.924 [2024-10-08 19:02:28.669054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:59.924 [2024-10-08 19:02:28.671056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Unregister IO device 00:34:59.924 [2024-10-08 19:02:28.671130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.259 ms 00:34:59.924 [2024-10-08 19:02:28.671167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:59.924 [2024-10-08 19:02:28.671928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:59.924 [2024-10-08 19:02:28.672009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:59.924 [2024-10-08 19:02:28.672046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.631 ms 00:34:59.924 [2024-10-08 19:02:28.672093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:59.924 [2024-10-08 19:02:28.677665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:59.924 [2024-10-08 19:02:28.677823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:34:59.924 [2024-10-08 19:02:28.677847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.524 ms 00:34:59.924 [2024-10-08 19:02:28.677858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.183 [2024-10-08 19:02:28.684077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.183 [2024-10-08 19:02:28.684120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:35:00.183 [2024-10-08 19:02:28.684134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.181 ms 00:35:00.184 [2024-10-08 19:02:28.684145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.184 [2024-10-08 19:02:28.728409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.184 [2024-10-08 19:02:28.728685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:35:00.184 [2024-10-08 19:02:28.728712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.164 ms 00:35:00.184 [2024-10-08 19:02:28.728723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.184 [2024-10-08 19:02:28.751673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.184 [2024-10-08 19:02:28.751752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:35:00.184 [2024-10-08 19:02:28.751770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.840 ms 00:35:00.184 [2024-10-08 19:02:28.751782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.184 [2024-10-08 19:02:28.751992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.184 [2024-10-08 19:02:28.752008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:35:00.184 [2024-10-08 19:02:28.752020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:35:00.184 [2024-10-08 19:02:28.752030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.184 [2024-10-08 19:02:28.792957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.184 [2024-10-08 19:02:28.793053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:35:00.184 [2024-10-08 19:02:28.793086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.890 ms 00:35:00.184 [2024-10-08 19:02:28.793097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.184 [2024-10-08 19:02:28.834839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.184 [2024-10-08 
19:02:28.835120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:35:00.184 [2024-10-08 19:02:28.835146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.665 ms 00:35:00.184 [2024-10-08 19:02:28.835156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.184 [2024-10-08 19:02:28.876436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.184 [2024-10-08 19:02:28.876721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:35:00.184 [2024-10-08 19:02:28.876748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.181 ms 00:35:00.184 [2024-10-08 19:02:28.876759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.184 [2024-10-08 19:02:28.917552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.184 [2024-10-08 19:02:28.917625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:35:00.184 [2024-10-08 19:02:28.917657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.670 ms 00:35:00.184 [2024-10-08 19:02:28.917668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.184 [2024-10-08 19:02:28.917738] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:35:00.184 [2024-10-08 19:02:28.917758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.917771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.917784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.917795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.917806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.917818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.917829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.917840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.917851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.917861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.917872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.917883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.917894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.917905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.917915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.917926] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.917936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.917947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.917976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.917988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.917999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.918010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.918021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.918032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.918043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.918055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.918068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.918079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.918091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.918103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.918115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.918126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.918137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.918154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.918166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.918177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.918187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.918207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.918217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.918228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 
19:02:28.918239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.918250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.918261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.918272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.918283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.918293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.918304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.918315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.918326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.918336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.918347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.918358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.918368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.918378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.918389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.918400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.918410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:35:00.184 [2024-10-08 19:02:28.918420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:35:00.185 [2024-10-08 19:02:28.918431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:35:00.185 [2024-10-08 19:02:28.918441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:35:00.185 [2024-10-08 19:02:28.918452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:35:00.185 [2024-10-08 19:02:28.918462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:35:00.185 [2024-10-08 19:02:28.918473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:35:00.185 [2024-10-08 19:02:28.918483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:35:00.185 [2024-10-08 19:02:28.918494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 
00:35:00.185 [2024-10-08 19:02:28.918505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:35:00.185 [2024-10-08 19:02:28.918516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:35:00.185 [2024-10-08 19:02:28.918527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:35:00.185 [2024-10-08 19:02:28.918538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:35:00.185 [2024-10-08 19:02:28.918549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:35:00.185 [2024-10-08 19:02:28.918560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:35:00.185 [2024-10-08 19:02:28.918570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:35:00.185 [2024-10-08 19:02:28.918581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:35:00.185 [2024-10-08 19:02:28.918592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:35:00.185 [2024-10-08 19:02:28.918602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:35:00.185 [2024-10-08 19:02:28.918613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:35:00.185 [2024-10-08 19:02:28.918623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:35:00.185 [2024-10-08 19:02:28.918633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:35:00.185 [2024-10-08 19:02:28.918644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:35:00.185 [2024-10-08 19:02:28.918654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:35:00.185 [2024-10-08 19:02:28.918665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:35:00.185 [2024-10-08 19:02:28.918676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:35:00.185 [2024-10-08 19:02:28.918687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:35:00.185 [2024-10-08 19:02:28.918697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:35:00.185 [2024-10-08 19:02:28.918707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:35:00.185 [2024-10-08 19:02:28.918718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:35:00.185 [2024-10-08 19:02:28.918728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:35:00.185 [2024-10-08 19:02:28.918738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:35:00.185 [2024-10-08 19:02:28.918749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:35:00.185 [2024-10-08 19:02:28.918759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 
wr_cnt: 0 state: free 00:35:00.185 [2024-10-08 19:02:28.918770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:35:00.185 [2024-10-08 19:02:28.918780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:35:00.185 [2024-10-08 19:02:28.918791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:35:00.185 [2024-10-08 19:02:28.918802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:35:00.185 [2024-10-08 19:02:28.918813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:35:00.185 [2024-10-08 19:02:28.918824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:35:00.185 [2024-10-08 19:02:28.918835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:35:00.185 [2024-10-08 19:02:28.918845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:35:00.185 [2024-10-08 19:02:28.918856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:35:00.185 [2024-10-08 19:02:28.918866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:35:00.185 [2024-10-08 19:02:28.918885] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:35:00.185 [2024-10-08 19:02:28.918895] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 88cec975-96d4-4e29-9174-d0217503c41a 00:35:00.185 [2024-10-08 19:02:28.918906] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:35:00.185 [2024-10-08 19:02:28.918915] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:35:00.185 [2024-10-08 19:02:28.918925] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:35:00.185 [2024-10-08 19:02:28.918935] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:35:00.185 [2024-10-08 19:02:28.918945] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:35:00.185 [2024-10-08 19:02:28.918969] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:35:00.185 [2024-10-08 19:02:28.918979] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:35:00.185 [2024-10-08 19:02:28.918988] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:35:00.185 [2024-10-08 19:02:28.918997] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:35:00.185 [2024-10-08 19:02:28.919008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.185 [2024-10-08 19:02:28.919030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:35:00.185 [2024-10-08 19:02:28.919042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.272 ms 00:35:00.185 [2024-10-08 19:02:28.919052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.444 [2024-10-08 19:02:28.940494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.444 [2024-10-08 19:02:28.940573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:35:00.444 [2024-10-08 19:02:28.940588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.389 ms 00:35:00.444 [2024-10-08 19:02:28.940609] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.444 [2024-10-08 19:02:28.941202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.444 [2024-10-08 19:02:28.941220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:35:00.444 [2024-10-08 19:02:28.941231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.555 ms 00:35:00.444 [2024-10-08 19:02:28.941242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.444 [2024-10-08 19:02:28.988087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:00.444 [2024-10-08 19:02:28.988156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:00.444 [2024-10-08 19:02:28.988181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:00.444 [2024-10-08 19:02:28.988193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.444 [2024-10-08 19:02:28.988271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:00.444 [2024-10-08 19:02:28.988284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:00.444 [2024-10-08 19:02:28.988296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:00.444 [2024-10-08 19:02:28.988307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.444 [2024-10-08 19:02:28.988420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:00.444 [2024-10-08 19:02:28.988436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:00.444 [2024-10-08 19:02:28.988448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:00.444 [2024-10-08 19:02:28.988465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.444 [2024-10-08 19:02:28.988485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:00.444 [2024-10-08 19:02:28.988497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:00.444 [2024-10-08 19:02:28.988508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:00.444 [2024-10-08 19:02:28.988520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.444 [2024-10-08 19:02:29.119667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:00.444 [2024-10-08 19:02:29.119736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:00.444 [2024-10-08 19:02:29.119759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:00.444 [2024-10-08 19:02:29.119770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.702 [2024-10-08 19:02:29.224969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:00.703 [2024-10-08 19:02:29.225026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:00.703 [2024-10-08 19:02:29.225041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:00.703 [2024-10-08 19:02:29.225052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.703 [2024-10-08 19:02:29.225144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:00.703 [2024-10-08 19:02:29.225156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:00.703 [2024-10-08 19:02:29.225167] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:00.703 [2024-10-08 19:02:29.225178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.703 [2024-10-08 19:02:29.225225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:00.703 [2024-10-08 19:02:29.225237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:00.703 [2024-10-08 19:02:29.225247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:00.703 [2024-10-08 19:02:29.225257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.703 [2024-10-08 19:02:29.225370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:00.703 [2024-10-08 19:02:29.225385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:00.703 [2024-10-08 19:02:29.225396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:00.703 [2024-10-08 19:02:29.225406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.703 [2024-10-08 19:02:29.225442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:00.703 [2024-10-08 19:02:29.225459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:35:00.703 [2024-10-08 19:02:29.225470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:00.703 [2024-10-08 19:02:29.225480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.703 [2024-10-08 19:02:29.225518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:00.703 [2024-10-08 19:02:29.225529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:00.703 [2024-10-08 19:02:29.225540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:00.703 [2024-10-08 19:02:29.225550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.703 [2024-10-08 19:02:29.225601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:00.703 [2024-10-08 19:02:29.225613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:00.703 [2024-10-08 19:02:29.225623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:00.703 [2024-10-08 19:02:29.225633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.703 [2024-10-08 19:02:29.225765] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 570.341 ms, result 0 00:35:02.128 00:35:02.128 00:35:02.128 19:02:30 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:35:04.029 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:35:04.029 19:02:32 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:35:04.290 [2024-10-08 19:02:32.809415] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
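The two ftl.ftl_restore commands traced just above carry the whole restore check: the file written before the FTL shutdown is verified against its recorded checksum, then spdk_dd replays the test file into the ftl0 bdev at a block offset. A minimal standalone sketch of that verify-then-write pattern, using the exact paths, flags, and offset shown in the trace (the surrounding SPDK target setup that restore.sh performs is assumed to already be in place):

    #!/usr/bin/env bash
    set -e
    # Verify the test file against the stored checksum (restore.sh@76 above).
    md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
    # Replay the file into the ftl0 bdev starting at block 131072 (restore.sh@79 above).
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
        --ob=ftl0 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json \
        --seek=131072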
00:35:04.290 [2024-10-08 19:02:32.809563] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78421 ] 00:35:04.290 [2024-10-08 19:02:32.976929] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:04.549 [2024-10-08 19:02:33.264132] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:35:05.118 [2024-10-08 19:02:33.639210] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:05.118 [2024-10-08 19:02:33.639292] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:05.118 [2024-10-08 19:02:33.800687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.118 [2024-10-08 19:02:33.800942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:35:05.118 [2024-10-08 19:02:33.800976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:35:05.118 [2024-10-08 19:02:33.800997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.118 [2024-10-08 19:02:33.801061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.118 [2024-10-08 19:02:33.801073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:05.118 [2024-10-08 19:02:33.801084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:35:05.118 [2024-10-08 19:02:33.801094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.118 [2024-10-08 19:02:33.801117] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:35:05.118 [2024-10-08 19:02:33.802046] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:35:05.118 [2024-10-08 19:02:33.802067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.118 [2024-10-08 19:02:33.802078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:05.118 [2024-10-08 19:02:33.802090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.955 ms 00:35:05.118 [2024-10-08 19:02:33.802100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.118 [2024-10-08 19:02:33.803526] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:35:05.118 [2024-10-08 19:02:33.822763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.118 [2024-10-08 19:02:33.822808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:35:05.118 [2024-10-08 19:02:33.822825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.238 ms 00:35:05.118 [2024-10-08 19:02:33.822836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.118 [2024-10-08 19:02:33.822898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.119 [2024-10-08 19:02:33.822911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:35:05.119 [2024-10-08 19:02:33.822922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:35:05.119 [2024-10-08 19:02:33.822932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.119 [2024-10-08 19:02:33.829750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:35:05.119 [2024-10-08 19:02:33.829884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:05.119 [2024-10-08 19:02:33.830025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.727 ms 00:35:05.119 [2024-10-08 19:02:33.830064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.119 [2024-10-08 19:02:33.830174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.119 [2024-10-08 19:02:33.830210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:05.119 [2024-10-08 19:02:33.830301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:35:05.119 [2024-10-08 19:02:33.830337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.119 [2024-10-08 19:02:33.830413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.119 [2024-10-08 19:02:33.830449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:35:05.119 [2024-10-08 19:02:33.830480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:35:05.119 [2024-10-08 19:02:33.830615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.119 [2024-10-08 19:02:33.830664] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:35:05.119 [2024-10-08 19:02:33.835656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.119 [2024-10-08 19:02:33.835791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:05.119 [2024-10-08 19:02:33.835925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.998 ms 00:35:05.119 [2024-10-08 19:02:33.835978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.119 [2024-10-08 19:02:33.836040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.119 [2024-10-08 19:02:33.836073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:35:05.119 [2024-10-08 19:02:33.836105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:35:05.119 [2024-10-08 19:02:33.836187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.119 [2024-10-08 19:02:33.836280] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:35:05.119 [2024-10-08 19:02:33.836328] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:35:05.119 [2024-10-08 19:02:33.836402] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:35:05.119 [2024-10-08 19:02:33.836518] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:35:05.119 [2024-10-08 19:02:33.836618] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:35:05.119 [2024-10-08 19:02:33.836631] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:35:05.119 [2024-10-08 19:02:33.836645] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:35:05.119 [2024-10-08 19:02:33.836664] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:35:05.119 [2024-10-08 19:02:33.836677] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:35:05.119 [2024-10-08 19:02:33.836688] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:35:05.119 [2024-10-08 19:02:33.836698] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:35:05.119 [2024-10-08 19:02:33.836708] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:35:05.119 [2024-10-08 19:02:33.836718] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:35:05.119 [2024-10-08 19:02:33.836730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.119 [2024-10-08 19:02:33.836740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:35:05.119 [2024-10-08 19:02:33.836751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.452 ms 00:35:05.119 [2024-10-08 19:02:33.836762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.119 [2024-10-08 19:02:33.836844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.119 [2024-10-08 19:02:33.836859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:35:05.119 [2024-10-08 19:02:33.836870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:35:05.119 [2024-10-08 19:02:33.836880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.119 [2024-10-08 19:02:33.836989] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:35:05.119 [2024-10-08 19:02:33.837005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:35:05.119 [2024-10-08 19:02:33.837017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:05.119 [2024-10-08 19:02:33.837028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:05.119 [2024-10-08 19:02:33.837039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:35:05.119 [2024-10-08 19:02:33.837048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:35:05.119 [2024-10-08 19:02:33.837058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:35:05.119 [2024-10-08 19:02:33.837068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:35:05.119 [2024-10-08 19:02:33.837077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:35:05.119 [2024-10-08 19:02:33.837087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:05.119 [2024-10-08 19:02:33.837096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:35:05.119 [2024-10-08 19:02:33.837106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:35:05.119 [2024-10-08 19:02:33.837115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:05.119 [2024-10-08 19:02:33.837134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:35:05.119 [2024-10-08 19:02:33.837144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:35:05.119 [2024-10-08 19:02:33.837154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:05.119 [2024-10-08 19:02:33.837164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:35:05.119 [2024-10-08 19:02:33.837174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:35:05.119 [2024-10-08 19:02:33.837184] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:05.119 [2024-10-08 19:02:33.837194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:35:05.119 [2024-10-08 19:02:33.837203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:35:05.119 [2024-10-08 19:02:33.837213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:05.119 [2024-10-08 19:02:33.837222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:35:05.119 [2024-10-08 19:02:33.837231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:35:05.119 [2024-10-08 19:02:33.837240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:05.119 [2024-10-08 19:02:33.837250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:35:05.119 [2024-10-08 19:02:33.837259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:35:05.119 [2024-10-08 19:02:33.837268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:05.119 [2024-10-08 19:02:33.837277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:35:05.119 [2024-10-08 19:02:33.837287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:35:05.119 [2024-10-08 19:02:33.837296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:05.119 [2024-10-08 19:02:33.837305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:35:05.119 [2024-10-08 19:02:33.837315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:35:05.119 [2024-10-08 19:02:33.837324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:05.119 [2024-10-08 19:02:33.837334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:35:05.119 [2024-10-08 19:02:33.837343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:35:05.119 [2024-10-08 19:02:33.837352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:05.119 [2024-10-08 19:02:33.837361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:35:05.119 [2024-10-08 19:02:33.837371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:35:05.119 [2024-10-08 19:02:33.837380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:05.119 [2024-10-08 19:02:33.837389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:35:05.119 [2024-10-08 19:02:33.837398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:35:05.119 [2024-10-08 19:02:33.837407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:05.119 [2024-10-08 19:02:33.837416] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:35:05.119 [2024-10-08 19:02:33.837427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:35:05.119 [2024-10-08 19:02:33.837440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:05.119 [2024-10-08 19:02:33.837449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:05.119 [2024-10-08 19:02:33.837460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:35:05.119 [2024-10-08 19:02:33.837472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:35:05.119 [2024-10-08 19:02:33.837482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:35:05.119 
[2024-10-08 19:02:33.837491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:35:05.119 [2024-10-08 19:02:33.837501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:35:05.119 [2024-10-08 19:02:33.837510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:35:05.119 [2024-10-08 19:02:33.837522] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:35:05.119 [2024-10-08 19:02:33.837535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:05.119 [2024-10-08 19:02:33.837547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:35:05.119 [2024-10-08 19:02:33.837558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:35:05.119 [2024-10-08 19:02:33.837568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:35:05.119 [2024-10-08 19:02:33.837579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:35:05.119 [2024-10-08 19:02:33.837589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:35:05.119 [2024-10-08 19:02:33.837600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:35:05.119 [2024-10-08 19:02:33.837610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:35:05.119 [2024-10-08 19:02:33.837621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:35:05.119 [2024-10-08 19:02:33.837631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:35:05.120 [2024-10-08 19:02:33.837642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:35:05.120 [2024-10-08 19:02:33.837652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:35:05.120 [2024-10-08 19:02:33.837662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:35:05.120 [2024-10-08 19:02:33.837672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:35:05.120 [2024-10-08 19:02:33.837683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:35:05.120 [2024-10-08 19:02:33.837694] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:35:05.120 [2024-10-08 19:02:33.837705] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:05.120 [2024-10-08 19:02:33.837717] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:35:05.120 [2024-10-08 19:02:33.837727] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:35:05.120 [2024-10-08 19:02:33.837738] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:35:05.120 [2024-10-08 19:02:33.837748] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:35:05.120 [2024-10-08 19:02:33.837759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.120 [2024-10-08 19:02:33.837769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:35:05.120 [2024-10-08 19:02:33.837779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.838 ms 00:35:05.120 [2024-10-08 19:02:33.837789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.379 [2024-10-08 19:02:33.888103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.379 [2024-10-08 19:02:33.888149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:05.379 [2024-10-08 19:02:33.888165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.262 ms 00:35:05.379 [2024-10-08 19:02:33.888176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.379 [2024-10-08 19:02:33.888266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.379 [2024-10-08 19:02:33.888277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:35:05.379 [2024-10-08 19:02:33.888288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:35:05.379 [2024-10-08 19:02:33.888299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.379 [2024-10-08 19:02:33.933842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.379 [2024-10-08 19:02:33.934082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:05.379 [2024-10-08 19:02:33.934113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.475 ms 00:35:05.379 [2024-10-08 19:02:33.934124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.379 [2024-10-08 19:02:33.934169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.379 [2024-10-08 19:02:33.934181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:05.379 [2024-10-08 19:02:33.934192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:35:05.379 [2024-10-08 19:02:33.934202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.379 [2024-10-08 19:02:33.934693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.379 [2024-10-08 19:02:33.934707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:05.379 [2024-10-08 19:02:33.934719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.408 ms 00:35:05.379 [2024-10-08 19:02:33.934734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.379 [2024-10-08 19:02:33.934849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.379 [2024-10-08 19:02:33.934863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:05.379 [2024-10-08 19:02:33.934874] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:35:05.379 [2024-10-08 19:02:33.934884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.379 [2024-10-08 19:02:33.953293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.379 [2024-10-08 19:02:33.953459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:05.379 [2024-10-08 19:02:33.953482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.387 ms 00:35:05.379 [2024-10-08 19:02:33.953493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.379 [2024-10-08 19:02:33.969896] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:35:05.379 [2024-10-08 19:02:33.969940] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:35:05.379 [2024-10-08 19:02:33.969988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.379 [2024-10-08 19:02:33.970003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:35:05.379 [2024-10-08 19:02:33.970017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.375 ms 00:35:05.379 [2024-10-08 19:02:33.970030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.379 [2024-10-08 19:02:33.997914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.379 [2024-10-08 19:02:33.997974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:35:05.379 [2024-10-08 19:02:33.997992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.833 ms 00:35:05.379 [2024-10-08 19:02:33.998007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.380 [2024-10-08 19:02:34.014649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.380 [2024-10-08 19:02:34.014694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:35:05.380 [2024-10-08 19:02:34.014713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.581 ms 00:35:05.380 [2024-10-08 19:02:34.014728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.380 [2024-10-08 19:02:34.032633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.380 [2024-10-08 19:02:34.032672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:35:05.380 [2024-10-08 19:02:34.032686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.841 ms 00:35:05.380 [2024-10-08 19:02:34.032697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.380 [2024-10-08 19:02:34.033578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.380 [2024-10-08 19:02:34.033613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:35:05.380 [2024-10-08 19:02:34.033628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.786 ms 00:35:05.380 [2024-10-08 19:02:34.033640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.380 [2024-10-08 19:02:34.122633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.380 [2024-10-08 19:02:34.122702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:35:05.380 [2024-10-08 19:02:34.122720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 88.967 ms 00:35:05.380 [2024-10-08 19:02:34.122731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.639 [2024-10-08 19:02:34.134573] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:35:05.639 [2024-10-08 19:02:34.137549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.639 [2024-10-08 19:02:34.137581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:35:05.639 [2024-10-08 19:02:34.137597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.762 ms 00:35:05.639 [2024-10-08 19:02:34.137613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.639 [2024-10-08 19:02:34.137705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.639 [2024-10-08 19:02:34.137718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:35:05.639 [2024-10-08 19:02:34.137731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:35:05.639 [2024-10-08 19:02:34.137741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.639 [2024-10-08 19:02:34.137816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.639 [2024-10-08 19:02:34.137829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:35:05.639 [2024-10-08 19:02:34.137840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:35:05.639 [2024-10-08 19:02:34.137850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.639 [2024-10-08 19:02:34.137875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.639 [2024-10-08 19:02:34.137886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:35:05.639 [2024-10-08 19:02:34.137897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:35:05.639 [2024-10-08 19:02:34.137907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.639 [2024-10-08 19:02:34.137943] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:35:05.639 [2024-10-08 19:02:34.137970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.639 [2024-10-08 19:02:34.137981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:35:05.639 [2024-10-08 19:02:34.137992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:35:05.639 [2024-10-08 19:02:34.138007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.639 [2024-10-08 19:02:34.175732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.639 [2024-10-08 19:02:34.175781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:35:05.639 [2024-10-08 19:02:34.175797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.701 ms 00:35:05.639 [2024-10-08 19:02:34.175808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:05.639 [2024-10-08 19:02:34.175890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:05.639 [2024-10-08 19:02:34.175904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:35:05.639 [2024-10-08 19:02:34.175916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:35:05.639 [2024-10-08 19:02:34.175926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
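Each management step in these traces is emitted as the same fixed quartet from mngt/ftl_mngt.c (427: Action/Rollback, 428: name, 430: duration, 431: status), and finish_msg at line 459 then reports the aggregate for the whole process ('FTL startup' and 'FTL shutdown' above). The layout dump admits a similar cross-check: the l2p region's 80.00 MiB is exactly the reported 20971520 L2P entries times the 4-byte L2P address size. The per-step durations can likewise be cross-checked against the finish_msg totals with ordinary text tools; a minimal sketch, assuming the console output has been saved as build.log (a hypothetical filename, not part of the test suite):

    # Sum every per-step 'duration: X ms' notice and count the steps;
    # compare against the 'duration = ...' totals printed by finish_msg.
    grep -oE 'duration: [0-9]+\.[0-9]+ ms' build.log |
      awk '{sum += $2} END {printf "%.3f ms across %d steps\n", sum, NR}'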
00:35:05.639 [2024-10-08 19:02:34.177315] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 376.076 ms, result 0 00:35:06.576  [2024-10-08T19:02:36.266Z] Copying: 31/1024 [MB] (31 MBps) [2024-10-08T19:02:37.202Z] Copying: 65/1024 [MB] (33 MBps) [2024-10-08T19:02:38.577Z] Copying: 97/1024 [MB] (32 MBps) [2024-10-08T19:02:39.513Z] Copying: 131/1024 [MB] (33 MBps) [2024-10-08T19:02:40.449Z] Copying: 164/1024 [MB] (33 MBps) [2024-10-08T19:02:41.402Z] Copying: 195/1024 [MB] (30 MBps) [2024-10-08T19:02:42.338Z] Copying: 226/1024 [MB] (31 MBps) [2024-10-08T19:02:43.274Z] Copying: 257/1024 [MB] (30 MBps) [2024-10-08T19:02:44.218Z] Copying: 287/1024 [MB] (30 MBps) [2024-10-08T19:02:45.594Z] Copying: 318/1024 [MB] (31 MBps) [2024-10-08T19:02:46.531Z] Copying: 349/1024 [MB] (30 MBps) [2024-10-08T19:02:47.467Z] Copying: 381/1024 [MB] (31 MBps) [2024-10-08T19:02:48.404Z] Copying: 412/1024 [MB] (30 MBps) [2024-10-08T19:02:49.338Z] Copying: 445/1024 [MB] (32 MBps) [2024-10-08T19:02:50.272Z] Copying: 479/1024 [MB] (34 MBps) [2024-10-08T19:02:51.207Z] Copying: 514/1024 [MB] (34 MBps) [2024-10-08T19:02:52.581Z] Copying: 548/1024 [MB] (33 MBps) [2024-10-08T19:02:53.515Z] Copying: 580/1024 [MB] (32 MBps) [2024-10-08T19:02:54.449Z] Copying: 598/1024 [MB] (18 MBps) [2024-10-08T19:02:55.384Z] Copying: 634/1024 [MB] (35 MBps) [2024-10-08T19:02:56.318Z] Copying: 667/1024 [MB] (32 MBps) [2024-10-08T19:02:57.334Z] Copying: 703/1024 [MB] (36 MBps) [2024-10-08T19:02:58.270Z] Copying: 739/1024 [MB] (35 MBps) [2024-10-08T19:02:59.202Z] Copying: 774/1024 [MB] (35 MBps) [2024-10-08T19:03:00.577Z] Copying: 811/1024 [MB] (37 MBps) [2024-10-08T19:03:01.512Z] Copying: 849/1024 [MB] (38 MBps) [2024-10-08T19:03:02.475Z] Copying: 886/1024 [MB] (36 MBps) [2024-10-08T19:03:03.429Z] Copying: 922/1024 [MB] (36 MBps) [2024-10-08T19:03:04.366Z] Copying: 956/1024 [MB] (33 MBps) [2024-10-08T19:03:05.303Z] Copying: 989/1024 [MB] (32 MBps) [2024-10-08T19:03:06.297Z] Copying: 1022/1024 [MB] (33 MBps) [2024-10-08T19:03:06.297Z] Copying: 1024/1024 [MB] (average 32 MBps)[2024-10-08 19:03:06.059584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:37.540 [2024-10-08 19:03:06.059673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:35:37.540 [2024-10-08 19:03:06.059693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:35:37.540 [2024-10-08 19:03:06.059705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:37.540 [2024-10-08 19:03:06.062436] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:35:37.540 [2024-10-08 19:03:06.068196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:37.540 [2024-10-08 19:03:06.068362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:35:37.540 [2024-10-08 19:03:06.068388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.711 ms 00:35:37.540 [2024-10-08 19:03:06.068409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:37.540 [2024-10-08 19:03:06.082137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:37.540 [2024-10-08 19:03:06.082305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:35:37.540 [2024-10-08 19:03:06.082331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.911 ms 00:35:37.540 [2024-10-08 19:03:06.082342] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:37.540 [2024-10-08 19:03:06.103549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:37.540 [2024-10-08 19:03:06.103607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:35:37.540 [2024-10-08 19:03:06.103628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.178 ms 00:35:37.540 [2024-10-08 19:03:06.103642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:37.540 [2024-10-08 19:03:06.109054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:37.540 [2024-10-08 19:03:06.109092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:35:37.540 [2024-10-08 19:03:06.109105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.360 ms 00:35:37.540 [2024-10-08 19:03:06.109116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:37.540 [2024-10-08 19:03:06.148717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:37.540 [2024-10-08 19:03:06.148779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:35:37.540 [2024-10-08 19:03:06.148797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.532 ms 00:35:37.540 [2024-10-08 19:03:06.148808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:37.540 [2024-10-08 19:03:06.171596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:37.540 [2024-10-08 19:03:06.171855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:35:37.540 [2024-10-08 19:03:06.171879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.726 ms 00:35:37.540 [2024-10-08 19:03:06.171891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:37.540 [2024-10-08 19:03:06.254524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:37.540 [2024-10-08 19:03:06.254632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:35:37.540 [2024-10-08 19:03:06.254667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.572 ms 00:35:37.540 [2024-10-08 19:03:06.254679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:37.540 [2024-10-08 19:03:06.294543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:37.540 [2024-10-08 19:03:06.294606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:35:37.540 [2024-10-08 19:03:06.294624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.839 ms 00:35:37.540 [2024-10-08 19:03:06.294635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:37.799 [2024-10-08 19:03:06.332199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:37.799 [2024-10-08 19:03:06.332466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:35:37.799 [2024-10-08 19:03:06.332492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.511 ms 00:35:37.799 [2024-10-08 19:03:06.332504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:37.799 [2024-10-08 19:03:06.371953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:37.799 [2024-10-08 19:03:06.372025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:35:37.799 [2024-10-08 19:03:06.372042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 39.391 ms 00:35:37.799 [2024-10-08 19:03:06.372053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:37.799 [2024-10-08 19:03:06.413620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:37.799 [2024-10-08 19:03:06.413834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:35:37.799 [2024-10-08 19:03:06.413859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.457 ms 00:35:37.799 [2024-10-08 19:03:06.413871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:37.799 [2024-10-08 19:03:06.413946] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:35:37.799 [2024-10-08 19:03:06.413981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 105216 / 261120 wr_cnt: 1 state: open 00:35:37.799 [2024-10-08 19:03:06.413997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:35:37.799 [2024-10-08 19:03:06.414009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:35:37.799 [2024-10-08 19:03:06.414021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:35:37.799 [2024-10-08 19:03:06.414033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:35:37.799 [2024-10-08 19:03:06.414045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:35:37.799 [2024-10-08 19:03:06.414056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:35:37.799 [2024-10-08 19:03:06.414068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:35:37.799 [2024-10-08 19:03:06.414079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:35:37.799 [2024-10-08 19:03:06.414091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:35:37.799 [2024-10-08 19:03:06.414103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:35:37.799 [2024-10-08 19:03:06.414115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:35:37.799 [2024-10-08 19:03:06.414127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:35:37.799 [2024-10-08 19:03:06.414138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:35:37.799 [2024-10-08 19:03:06.414150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:35:37.799 [2024-10-08 19:03:06.414161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:35:37.799 [2024-10-08 19:03:06.414173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:35:37.799 [2024-10-08 19:03:06.414185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:35:37.799 [2024-10-08 19:03:06.414197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:35:37.799 [2024-10-08 19:03:06.414208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 
state: free 00:35:37.799 [2024-10-08 19:03:06.414220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:35:37.799 [2024-10-08 19:03:06.414231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:35:37.799 [2024-10-08 19:03:06.414243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:35:37.799 [2024-10-08 19:03:06.414254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:35:37.799 [2024-10-08 19:03:06.414266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:35:37.799 [2024-10-08 19:03:06.414278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:35:37.799 [2024-10-08 19:03:06.414289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:35:37.799 [2024-10-08 19:03:06.414303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:35:37.799 [2024-10-08 19:03:06.414314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:35:37.799 [2024-10-08 19:03:06.414326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:35:37.799 [2024-10-08 19:03:06.414338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:35:37.799 [2024-10-08 19:03:06.414350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:35:37.799 [2024-10-08 19:03:06.414362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:35:37.799 [2024-10-08 19:03:06.414374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:35:37.799 [2024-10-08 19:03:06.414386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:35:37.799 [2024-10-08 19:03:06.414398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:35:37.799 [2024-10-08 19:03:06.414409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.414421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.414433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.414445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.414457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.414468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.414480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.414492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.414503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 
0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.414515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.414527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.414538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.414550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.414561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.414573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.414584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.414596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.414608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.414620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.414631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.414644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.414655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.414666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.414678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.414690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.414702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.414713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.414726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.414738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.414749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.414761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.414773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.414784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.414796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.414808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.414819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.414831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.414843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.414854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.414865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.414877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.414888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.414899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.414911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.414923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.414934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.414946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.414969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.414982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.414994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.415006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.415018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.415030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.415041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.415054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.415065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.415077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.415088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.415100] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.415112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.415124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.415135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.415147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.415158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:35:37.800 [2024-10-08 19:03:06.415178] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:35:37.800 [2024-10-08 19:03:06.415199] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 88cec975-96d4-4e29-9174-d0217503c41a 00:35:37.800 [2024-10-08 19:03:06.415218] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 105216 00:35:37.800 [2024-10-08 19:03:06.415228] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 106176 00:35:37.800 [2024-10-08 19:03:06.415239] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 105216 00:35:37.800 [2024-10-08 19:03:06.415252] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0091 00:35:37.800 [2024-10-08 19:03:06.415262] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:35:37.800 [2024-10-08 19:03:06.415273] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:35:37.800 [2024-10-08 19:03:06.415284] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:35:37.800 [2024-10-08 19:03:06.415294] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:35:37.800 [2024-10-08 19:03:06.415305] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:35:37.800 [2024-10-08 19:03:06.415316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:37.800 [2024-10-08 19:03:06.415340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:35:37.800 [2024-10-08 19:03:06.415351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.371 ms 00:35:37.800 [2024-10-08 19:03:06.415362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:37.800 [2024-10-08 19:03:06.436423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:37.800 [2024-10-08 19:03:06.436605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:35:37.800 [2024-10-08 19:03:06.436628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.006 ms 00:35:37.800 [2024-10-08 19:03:06.436639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:37.800 [2024-10-08 19:03:06.437244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:37.800 [2024-10-08 19:03:06.437259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:35:37.800 [2024-10-08 19:03:06.437271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms 00:35:37.800 [2024-10-08 19:03:06.437288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:37.800 [2024-10-08 19:03:06.485676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
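The ftl_dev_dump_stats block above reports WAF alongside the raw write counters, and the figure checks out as total writes divided by user writes, the usual write-amplification definition. A quick verification with the values copied from this log:

# Counters as logged by ftl_dev_dump_stats above.
total_writes = 106_176   # total writes (host data plus FTL-internal writes)
user_writes = 105_216    # user writes

waf = total_writes / user_writes
print(f"WAF = {waf:.4f}")   # 1.0091, matching the logged value

The 960-block gap between total and user writes is FTL-internal traffic (presumably metadata), which is what keeps WAF just above 1.0 here.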
00:35:37.800 [2024-10-08 19:03:06.485899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:37.800 [2024-10-08 19:03:06.485927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:37.800 [2024-10-08 19:03:06.485939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:37.800 [2024-10-08 19:03:06.486043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:37.800 [2024-10-08 19:03:06.486057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:37.800 [2024-10-08 19:03:06.486068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:37.800 [2024-10-08 19:03:06.486084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:37.800 [2024-10-08 19:03:06.486173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:37.800 [2024-10-08 19:03:06.486188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:37.800 [2024-10-08 19:03:06.486199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:37.800 [2024-10-08 19:03:06.486210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:37.800 [2024-10-08 19:03:06.486230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:37.800 [2024-10-08 19:03:06.486242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:37.800 [2024-10-08 19:03:06.486254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:37.800 [2024-10-08 19:03:06.486265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:38.059 [2024-10-08 19:03:06.617230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:38.059 [2024-10-08 19:03:06.617285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:38.059 [2024-10-08 19:03:06.617301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:38.059 [2024-10-08 19:03:06.617328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:38.059 [2024-10-08 19:03:06.725208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:38.059 [2024-10-08 19:03:06.725467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:38.059 [2024-10-08 19:03:06.725495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:38.059 [2024-10-08 19:03:06.725517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:38.059 [2024-10-08 19:03:06.725622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:38.059 [2024-10-08 19:03:06.725635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:38.059 [2024-10-08 19:03:06.725648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:38.059 [2024-10-08 19:03:06.725659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:38.059 [2024-10-08 19:03:06.725706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:38.059 [2024-10-08 19:03:06.725719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:38.059 [2024-10-08 19:03:06.725730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:38.059 [2024-10-08 19:03:06.725741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:38.059 [2024-10-08 
19:03:06.725871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:38.059 [2024-10-08 19:03:06.725886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:38.059 [2024-10-08 19:03:06.725898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:38.059 [2024-10-08 19:03:06.725909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:38.059 [2024-10-08 19:03:06.725946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:38.059 [2024-10-08 19:03:06.725985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:35:38.059 [2024-10-08 19:03:06.725998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:38.059 [2024-10-08 19:03:06.726008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:38.059 [2024-10-08 19:03:06.726053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:38.059 [2024-10-08 19:03:06.726066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:38.059 [2024-10-08 19:03:06.726078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:38.059 [2024-10-08 19:03:06.726090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:38.059 [2024-10-08 19:03:06.726135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:38.059 [2024-10-08 19:03:06.726148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:38.059 [2024-10-08 19:03:06.726159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:38.060 [2024-10-08 19:03:06.726171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:38.060 [2024-10-08 19:03:06.726296] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 667.397 ms, result 0 00:35:39.960 00:35:39.960 00:35:39.960 19:03:08 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:35:40.218 [2024-10-08 19:03:08.729047] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
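The spdk_dd invocation above restores data through the ftl0 bdev; --skip and --count are counted in input blocks, as with dd's skip/count. Assuming ftl0 exposes 4 KiB blocks (the usual FTL block size; not stated explicitly in this log), the numbers line up with the 1024 [MB] total shown in the copy progress:

# Size arithmetic for the spdk_dd command above; --skip/--count values
# are taken from the command line, block size of 4 KiB is an assumption.
BLOCK = 4096
skip, count = 131072, 262144

print(f"read offset: {skip * BLOCK // 2**20} MiB")   # 512 MiB into ftl0
print(f"read length: {count * BLOCK // 2**20} MiB")  # 1024 MiB, matching the
                                                     # 1024 [MB] copy total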
00:35:40.218 [2024-10-08 19:03:08.729449] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78780 ] 00:35:40.218 [2024-10-08 19:03:08.905923] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:40.477 [2024-10-08 19:03:09.125874] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:35:40.737 [2024-10-08 19:03:09.486712] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:40.737 [2024-10-08 19:03:09.486780] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:40.998 [2024-10-08 19:03:09.648648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:40.998 [2024-10-08 19:03:09.648871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:35:40.998 [2024-10-08 19:03:09.648897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:35:40.998 [2024-10-08 19:03:09.648914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:40.998 [2024-10-08 19:03:09.648996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:40.998 [2024-10-08 19:03:09.649011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:40.998 [2024-10-08 19:03:09.649021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:35:40.998 [2024-10-08 19:03:09.649032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:40.998 [2024-10-08 19:03:09.649055] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:35:40.998 [2024-10-08 19:03:09.650099] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:35:40.998 [2024-10-08 19:03:09.650132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:40.998 [2024-10-08 19:03:09.650143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:40.998 [2024-10-08 19:03:09.650155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.082 ms 00:35:40.998 [2024-10-08 19:03:09.650165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:40.998 [2024-10-08 19:03:09.651657] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:35:40.998 [2024-10-08 19:03:09.672212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:40.998 [2024-10-08 19:03:09.672278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:35:40.998 [2024-10-08 19:03:09.672294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.556 ms 00:35:40.998 [2024-10-08 19:03:09.672305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:40.998 [2024-10-08 19:03:09.672374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:40.998 [2024-10-08 19:03:09.672388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:35:40.998 [2024-10-08 19:03:09.672400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:35:40.998 [2024-10-08 19:03:09.672411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:40.998 [2024-10-08 19:03:09.679353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
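In the layout dump that follows, ftl_layout prints each region in MiB while the superblock dump repeats the same regions as hex blk_offs/blk_sz in FTL blocks. A quick cross-check, again assuming 4 KiB FTL blocks; the region sizes are copied from the dump below:

# Convert a hex block count from the SB metadata layout dump to MiB.
BLOCK = 4096

def mib(blocks_hex: str) -> float:
    return int(blocks_hex, 16) * BLOCK / 2**20

print(mib("0x5000"))  # 80.0  -> "Region l2p ... blocks: 80.00 MiB"
print(mib("0x800"))   # 8.0   -> "Region p2l0 ... blocks: 8.00 MiB"
print(mib("0x20"))    # 0.125 -> "Region sb ... blocks: 0.12 MiB"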
00:35:40.998 [2024-10-08 19:03:09.679384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:40.998 [2024-10-08 19:03:09.679397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.853 ms 00:35:40.998 [2024-10-08 19:03:09.679407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:40.998 [2024-10-08 19:03:09.679494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:40.998 [2024-10-08 19:03:09.679509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:40.998 [2024-10-08 19:03:09.679520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:35:40.998 [2024-10-08 19:03:09.679531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:40.998 [2024-10-08 19:03:09.679579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:40.998 [2024-10-08 19:03:09.679591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:35:40.998 [2024-10-08 19:03:09.679603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:35:40.998 [2024-10-08 19:03:09.679613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:40.998 [2024-10-08 19:03:09.679638] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:35:40.998 [2024-10-08 19:03:09.684779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:40.998 [2024-10-08 19:03:09.684938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:40.998 [2024-10-08 19:03:09.684991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.148 ms 00:35:40.998 [2024-10-08 19:03:09.685004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:40.998 [2024-10-08 19:03:09.685054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:40.998 [2024-10-08 19:03:09.685067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:35:40.998 [2024-10-08 19:03:09.685079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:35:40.998 [2024-10-08 19:03:09.685089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:40.998 [2024-10-08 19:03:09.685157] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:35:40.998 [2024-10-08 19:03:09.685183] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:35:40.998 [2024-10-08 19:03:09.685227] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:35:40.998 [2024-10-08 19:03:09.685247] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:35:40.998 [2024-10-08 19:03:09.685347] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:35:40.998 [2024-10-08 19:03:09.685361] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:35:40.998 [2024-10-08 19:03:09.685376] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:35:40.998 [2024-10-08 19:03:09.685394] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:35:40.998 [2024-10-08 19:03:09.685407] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:35:40.998 [2024-10-08 19:03:09.685419] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:35:40.998 [2024-10-08 19:03:09.685431] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:35:40.998 [2024-10-08 19:03:09.685442] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:35:40.998 [2024-10-08 19:03:09.685452] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:35:40.998 [2024-10-08 19:03:09.685463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:40.998 [2024-10-08 19:03:09.685475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:35:40.998 [2024-10-08 19:03:09.685486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:35:40.998 [2024-10-08 19:03:09.685497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:40.998 [2024-10-08 19:03:09.685579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:40.998 [2024-10-08 19:03:09.685594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:35:40.998 [2024-10-08 19:03:09.685606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:35:40.998 [2024-10-08 19:03:09.685617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:40.998 [2024-10-08 19:03:09.685721] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:35:40.998 [2024-10-08 19:03:09.685737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:35:40.998 [2024-10-08 19:03:09.685749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:40.998 [2024-10-08 19:03:09.685760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:40.998 [2024-10-08 19:03:09.685772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:35:40.998 [2024-10-08 19:03:09.685782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:35:40.998 [2024-10-08 19:03:09.685793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:35:40.998 [2024-10-08 19:03:09.685803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:35:40.998 [2024-10-08 19:03:09.685815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:35:40.998 [2024-10-08 19:03:09.685826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:40.998 [2024-10-08 19:03:09.685836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:35:40.998 [2024-10-08 19:03:09.685846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:35:40.998 [2024-10-08 19:03:09.685857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:40.998 [2024-10-08 19:03:09.685877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:35:40.998 [2024-10-08 19:03:09.685887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:35:40.998 [2024-10-08 19:03:09.685897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:40.998 [2024-10-08 19:03:09.685908] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:35:40.998 [2024-10-08 19:03:09.685918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:35:40.998 [2024-10-08 19:03:09.685927] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:40.998 [2024-10-08 19:03:09.685938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:35:40.998 [2024-10-08 19:03:09.685948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:35:40.998 [2024-10-08 19:03:09.685970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:40.998 [2024-10-08 19:03:09.685981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:35:40.998 [2024-10-08 19:03:09.685992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:35:40.998 [2024-10-08 19:03:09.686002] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:40.998 [2024-10-08 19:03:09.686012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:35:40.998 [2024-10-08 19:03:09.686024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:35:40.999 [2024-10-08 19:03:09.686035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:40.999 [2024-10-08 19:03:09.686045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:35:40.999 [2024-10-08 19:03:09.686055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:35:40.999 [2024-10-08 19:03:09.686065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:40.999 [2024-10-08 19:03:09.686075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:35:40.999 [2024-10-08 19:03:09.686085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:35:40.999 [2024-10-08 19:03:09.686095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:40.999 [2024-10-08 19:03:09.686105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:35:40.999 [2024-10-08 19:03:09.686115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:35:40.999 [2024-10-08 19:03:09.686124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:40.999 [2024-10-08 19:03:09.686135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:35:40.999 [2024-10-08 19:03:09.686145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:35:40.999 [2024-10-08 19:03:09.686157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:40.999 [2024-10-08 19:03:09.686167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:35:40.999 [2024-10-08 19:03:09.686177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:35:40.999 [2024-10-08 19:03:09.686187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:40.999 [2024-10-08 19:03:09.686196] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:35:40.999 [2024-10-08 19:03:09.686207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:35:40.999 [2024-10-08 19:03:09.686222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:40.999 [2024-10-08 19:03:09.686232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:40.999 [2024-10-08 19:03:09.686243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:35:40.999 [2024-10-08 19:03:09.686254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:35:40.999 [2024-10-08 19:03:09.686264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:35:40.999 
[2024-10-08 19:03:09.686274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:35:40.999 [2024-10-08 19:03:09.686284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:35:40.999 [2024-10-08 19:03:09.686294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:35:40.999 [2024-10-08 19:03:09.686306] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:35:40.999 [2024-10-08 19:03:09.686319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:40.999 [2024-10-08 19:03:09.686331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:35:40.999 [2024-10-08 19:03:09.686342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:35:40.999 [2024-10-08 19:03:09.686353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:35:40.999 [2024-10-08 19:03:09.686375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:35:40.999 [2024-10-08 19:03:09.686386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:35:40.999 [2024-10-08 19:03:09.686396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:35:40.999 [2024-10-08 19:03:09.686406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:35:40.999 [2024-10-08 19:03:09.686416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:35:40.999 [2024-10-08 19:03:09.686443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:35:40.999 [2024-10-08 19:03:09.686454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:35:40.999 [2024-10-08 19:03:09.686466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:35:40.999 [2024-10-08 19:03:09.686477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:35:40.999 [2024-10-08 19:03:09.686487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:35:40.999 [2024-10-08 19:03:09.686499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:35:40.999 [2024-10-08 19:03:09.686510] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:35:40.999 [2024-10-08 19:03:09.686522] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:40.999 [2024-10-08 19:03:09.686534] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:35:40.999 [2024-10-08 19:03:09.686545] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:35:40.999 [2024-10-08 19:03:09.686556] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:35:40.999 [2024-10-08 19:03:09.686567] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:35:40.999 [2024-10-08 19:03:09.686580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:40.999 [2024-10-08 19:03:09.686591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:35:40.999 [2024-10-08 19:03:09.686602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.920 ms 00:35:40.999 [2024-10-08 19:03:09.686612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:40.999 [2024-10-08 19:03:09.731660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:40.999 [2024-10-08 19:03:09.731947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:40.999 [2024-10-08 19:03:09.731998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.993 ms 00:35:40.999 [2024-10-08 19:03:09.732011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:40.999 [2024-10-08 19:03:09.732131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:40.999 [2024-10-08 19:03:09.732144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:35:40.999 [2024-10-08 19:03:09.732156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:35:40.999 [2024-10-08 19:03:09.732167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:41.258 [2024-10-08 19:03:09.779891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:41.258 [2024-10-08 19:03:09.779944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:41.258 [2024-10-08 19:03:09.779981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.634 ms 00:35:41.258 [2024-10-08 19:03:09.779993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:41.258 [2024-10-08 19:03:09.780051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:41.258 [2024-10-08 19:03:09.780061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:41.258 [2024-10-08 19:03:09.780089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:35:41.258 [2024-10-08 19:03:09.780099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:41.258 [2024-10-08 19:03:09.780605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:41.258 [2024-10-08 19:03:09.780625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:41.258 [2024-10-08 19:03:09.780636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.418 ms 00:35:41.258 [2024-10-08 19:03:09.780653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:41.258 [2024-10-08 19:03:09.780772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:41.258 [2024-10-08 19:03:09.780786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:41.258 [2024-10-08 19:03:09.780797] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:35:41.258 [2024-10-08 19:03:09.780807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:41.258 [2024-10-08 19:03:09.799000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:41.258 [2024-10-08 19:03:09.799253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:41.258 [2024-10-08 19:03:09.799281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.170 ms 00:35:41.258 [2024-10-08 19:03:09.799294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:41.258 [2024-10-08 19:03:09.819647] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:35:41.258 [2024-10-08 19:03:09.819704] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:35:41.258 [2024-10-08 19:03:09.819724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:41.258 [2024-10-08 19:03:09.819735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:35:41.258 [2024-10-08 19:03:09.819750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.259 ms 00:35:41.258 [2024-10-08 19:03:09.819761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:41.258 [2024-10-08 19:03:09.851115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:41.258 [2024-10-08 19:03:09.851172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:35:41.258 [2024-10-08 19:03:09.851189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.299 ms 00:35:41.258 [2024-10-08 19:03:09.851201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:41.258 [2024-10-08 19:03:09.871118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:41.258 [2024-10-08 19:03:09.871194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:35:41.258 [2024-10-08 19:03:09.871210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.858 ms 00:35:41.258 [2024-10-08 19:03:09.871221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:41.258 [2024-10-08 19:03:09.890810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:41.258 [2024-10-08 19:03:09.891046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:35:41.258 [2024-10-08 19:03:09.891071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.513 ms 00:35:41.258 [2024-10-08 19:03:09.891082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:41.258 [2024-10-08 19:03:09.891937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:41.258 [2024-10-08 19:03:09.891978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:35:41.258 [2024-10-08 19:03:09.891992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.710 ms 00:35:41.258 [2024-10-08 19:03:09.892002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:41.258 [2024-10-08 19:03:09.981494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:41.258 [2024-10-08 19:03:09.981566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:35:41.258 [2024-10-08 19:03:09.981584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 89.457 ms 00:35:41.258 [2024-10-08 19:03:09.981595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:41.258 [2024-10-08 19:03:09.993389] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:35:41.259 [2024-10-08 19:03:09.996886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:41.259 [2024-10-08 19:03:09.996923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:35:41.259 [2024-10-08 19:03:09.996939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.223 ms 00:35:41.259 [2024-10-08 19:03:09.996969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:41.259 [2024-10-08 19:03:09.997079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:41.259 [2024-10-08 19:03:09.997092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:35:41.259 [2024-10-08 19:03:09.997104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:35:41.259 [2024-10-08 19:03:09.997114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:41.259 [2024-10-08 19:03:09.998703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:41.259 [2024-10-08 19:03:09.998744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:35:41.259 [2024-10-08 19:03:09.998757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.520 ms 00:35:41.259 [2024-10-08 19:03:09.998767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:41.259 [2024-10-08 19:03:09.998816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:41.259 [2024-10-08 19:03:09.998827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:35:41.259 [2024-10-08 19:03:09.998839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:35:41.259 [2024-10-08 19:03:09.998849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:41.259 [2024-10-08 19:03:09.998884] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:35:41.259 [2024-10-08 19:03:09.998897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:41.259 [2024-10-08 19:03:09.998907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:35:41.259 [2024-10-08 19:03:09.998918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:35:41.259 [2024-10-08 19:03:09.998932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:41.519 [2024-10-08 19:03:10.039963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:41.519 [2024-10-08 19:03:10.040196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:35:41.519 [2024-10-08 19:03:10.040325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.003 ms 00:35:41.519 [2024-10-08 19:03:10.040368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:41.519 [2024-10-08 19:03:10.040481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:41.519 [2024-10-08 19:03:10.040532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:35:41.519 [2024-10-08 19:03:10.040563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:35:41.519 [2024-10-08 19:03:10.040666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
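The Copying progress records below report per-interval and average throughput, and the average can be sanity-checked against the wall-clock timestamps bracketing the copy. A rough check using the startup finish_msg and the first Deinit record from this log (rough because the copy does not begin at exactly the first timestamp):

from datetime import datetime

# Timestamps copied from this log: 'FTL startup' finish_msg and the
# "Deinit core IO channel" record after the final progress line.
t0 = datetime.fromisoformat("2024-10-08T19:03:10.042036")
t1 = datetime.fromisoformat("2024-10-08T19:03:43.051494")

print(1024 / (t1 - t0).total_seconds())  # ~31 MB/s, matching the logged
                                         # "average 31 MBps"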
00:35:41.519 [2024-10-08 19:03:10.042036] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 392.789 ms, result 0
00:35:42.898  [2024-10-08T19:03:12.594Z] Copying: 27/1024 [MB] (27 MBps) [... 31 intermediate Copying progress updates, 28-33 MBps ...] [2024-10-08T19:03:43.232Z] Copying: 1024/1024 [MB] (average 31 MBps)[2024-10-08 19:03:43.051494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:14.475 [2024-10-08 19:03:43.051575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:36:14.475 [2024-10-08 19:03:43.051594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:36:14.475 [2024-10-08 19:03:43.051606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:14.475 [2024-10-08 19:03:43.051633] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:36:14.475 [2024-10-08 19:03:43.057252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:14.475 [2024-10-08 19:03:43.057330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:36:14.475 [2024-10-08 19:03:43.057348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.596 ms
00:36:14.475 [2024-10-08 19:03:43.057371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:14.475 [2024-10-08 19:03:43.057619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:14.475 [2024-10-08 19:03:43.057634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:36:14.475 [2024-10-08 19:03:43.057647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration:
0.209 ms 00:36:14.475 [2024-10-08 19:03:43.057659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:14.475 [2024-10-08 19:03:43.063860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:14.475 [2024-10-08 19:03:43.064137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:36:14.475 [2024-10-08 19:03:43.064289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.174 ms 00:36:14.475 [2024-10-08 19:03:43.064341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:14.475 [2024-10-08 19:03:43.071938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:14.475 [2024-10-08 19:03:43.072222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:36:14.475 [2024-10-08 19:03:43.072250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.500 ms 00:36:14.475 [2024-10-08 19:03:43.072263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:14.475 [2024-10-08 19:03:43.115438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:14.475 [2024-10-08 19:03:43.115524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:36:14.475 [2024-10-08 19:03:43.115543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.088 ms 00:36:14.475 [2024-10-08 19:03:43.115555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:14.475 [2024-10-08 19:03:43.139674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:14.475 [2024-10-08 19:03:43.139752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:36:14.475 [2024-10-08 19:03:43.139770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.035 ms 00:36:14.475 [2024-10-08 19:03:43.139782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:14.475 [2024-10-08 19:03:43.222882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:14.475 [2024-10-08 19:03:43.223011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:36:14.475 [2024-10-08 19:03:43.223047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.004 ms 00:36:14.475 [2024-10-08 19:03:43.223061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:14.736 [2024-10-08 19:03:43.266304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:14.737 [2024-10-08 19:03:43.266416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:36:14.737 [2024-10-08 19:03:43.266435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.215 ms 00:36:14.737 [2024-10-08 19:03:43.266447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:14.737 [2024-10-08 19:03:43.309141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:14.737 [2024-10-08 19:03:43.309473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:36:14.737 [2024-10-08 19:03:43.309519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.616 ms 00:36:14.737 [2024-10-08 19:03:43.309532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:14.737 [2024-10-08 19:03:43.353650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:14.737 [2024-10-08 19:03:43.353713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:36:14.737 [2024-10-08 19:03:43.353731] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.046 ms
00:36:14.737 [2024-10-08 19:03:43.353760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:14.737 [2024-10-08 19:03:43.396967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:14.737 [2024-10-08 19:03:43.397051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:36:14.737 [2024-10-08 19:03:43.397071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.068 ms
00:36:14.737 [2024-10-08 19:03:43.397083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:14.737 [2024-10-08 19:03:43.397165] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:36:14.737 [2024-10-08 19:03:43.397186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open
00:36:14.737 [2024-10-08 19:03:43.397202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 2-100: 0 / 261120 wr_cnt: 0 state: free (99 identical entries)
00:36:14.738 [2024-10-08 19:03:43.398378] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:36:14.738 [2024-10-08 19:03:43.398389] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 88cec975-96d4-4e29-9174-d0217503c41a
00:36:14.738 [2024-10-08 19:03:43.398412] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072
00:36:14.738 [2024-10-08 19:03:43.398423] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 26816
00:36:14.738 [2024-10-08 19:03:43.398434] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 25856
00:36:14.738 [2024-10-08 19:03:43.398446] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0371
00:36:14.738 [2024-10-08 19:03:43.398456] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:36:14.738 [2024-10-08 19:03:43.398467] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:36:14.738 [2024-10-08 19:03:43.398478] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:36:14.738 [2024-10-08 19:03:43.398489] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:36:14.738 [2024-10-08 19:03:43.398499] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:36:14.738 [2024-10-08 19:03:43.398510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:14.738 [2024-10-08 19:03:43.398521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:36:14.738 [2024-10-08 19:03:43.398547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.347 ms
00:36:14.738 [2024-10-08 19:03:43.398558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:14.738 [2024-10-08 19:03:43.420806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:14.738 [2024-10-08 19:03:43.420876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:36:14.738 [2024-10-08 19:03:43.420894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.183 ms
00:36:14.738 [2024-10-08 19:03:43.420907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:14.738 [2024-10-08 19:03:43.421572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:36:14.738 [2024-10-08 19:03:43.421595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:36:14.738 [2024-10-08 19:03:43.421607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.622 ms
00:36:14.738 [2024-10-08 19:03:43.421629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:14.738 [2024-10-08 19:03:43.470337] mngt/ftl_mngt.c:
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:14.738 [2024-10-08 19:03:43.470425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:36:14.738 [2024-10-08 19:03:43.470443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:14.738 [2024-10-08 19:03:43.470456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:14.738 [2024-10-08 19:03:43.470533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:14.738 [2024-10-08 19:03:43.470545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:36:14.738 [2024-10-08 19:03:43.470556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:14.738 [2024-10-08 19:03:43.470575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:14.738 [2024-10-08 19:03:43.470661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:14.738 [2024-10-08 19:03:43.470676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:36:14.738 [2024-10-08 19:03:43.470688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:14.738 [2024-10-08 19:03:43.470699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:14.738 [2024-10-08 19:03:43.470718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:14.738 [2024-10-08 19:03:43.470730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:36:14.738 [2024-10-08 19:03:43.470741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:14.738 [2024-10-08 19:03:43.470752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:15.008 [2024-10-08 19:03:43.606372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:15.008 [2024-10-08 19:03:43.606447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:36:15.008 [2024-10-08 19:03:43.606463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:15.008 [2024-10-08 19:03:43.606491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:15.008 [2024-10-08 19:03:43.723007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:15.008 [2024-10-08 19:03:43.723079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:36:15.008 [2024-10-08 19:03:43.723096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:15.008 [2024-10-08 19:03:43.723120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:15.008 [2024-10-08 19:03:43.723229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:15.008 [2024-10-08 19:03:43.723244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:36:15.008 [2024-10-08 19:03:43.723256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:15.008 [2024-10-08 19:03:43.723267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:15.008 [2024-10-08 19:03:43.723317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:15.008 [2024-10-08 19:03:43.723330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:36:15.008 [2024-10-08 19:03:43.723342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:15.008 [2024-10-08 19:03:43.723353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:36:15.008 [2024-10-08 19:03:43.723489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:15.008 [2024-10-08 19:03:43.723505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:36:15.008 [2024-10-08 19:03:43.723518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:15.008 [2024-10-08 19:03:43.723529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:15.008 [2024-10-08 19:03:43.723569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:15.008 [2024-10-08 19:03:43.723582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:36:15.008 [2024-10-08 19:03:43.723594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:15.008 [2024-10-08 19:03:43.723605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:15.008 [2024-10-08 19:03:43.723650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:15.008 [2024-10-08 19:03:43.723664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:36:15.008 [2024-10-08 19:03:43.723675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:15.008 [2024-10-08 19:03:43.723686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:15.008 [2024-10-08 19:03:43.723733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:15.008 [2024-10-08 19:03:43.723746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:36:15.008 [2024-10-08 19:03:43.723757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:15.008 [2024-10-08 19:03:43.723768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:15.008 [2024-10-08 19:03:43.723894] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 672.366 ms, result 0 00:36:16.386 00:36:16.386 00:36:16.386 19:03:45 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:36:18.296 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:36:18.296 19:03:46 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:36:18.296 19:03:46 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:36:18.296 19:03:46 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:36:18.555 19:03:47 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:36:18.555 19:03:47 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:36:18.555 19:03:47 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 77397 00:36:18.555 19:03:47 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 77397 ']' 00:36:18.555 19:03:47 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 77397 00:36:18.555 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (77397) - No such process 00:36:18.555 Process with pid 77397 is not found 00:36:18.555 19:03:47 ftl.ftl_restore -- common/autotest_common.sh@977 -- # echo 'Process with pid 77397 is not found' 00:36:18.555 Remove shared memory files 00:36:18.555 19:03:47 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:36:18.555 19:03:47 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:36:18.555 19:03:47 ftl.ftl_restore -- 
ftl/common.sh@205 -- # rm -f rm -f 00:36:18.555 19:03:47 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:36:18.555 19:03:47 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:36:18.555 19:03:47 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:36:18.555 19:03:47 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:36:18.555 ************************************ 00:36:18.555 END TEST ftl_restore 00:36:18.555 ************************************ 00:36:18.555 00:36:18.555 real 2m53.237s 00:36:18.555 user 2m38.071s 00:36:18.555 sys 0m16.352s 00:36:18.555 19:03:47 ftl.ftl_restore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:18.555 19:03:47 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:36:18.555 19:03:47 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:36:18.555 19:03:47 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:36:18.555 19:03:47 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:18.555 19:03:47 ftl -- common/autotest_common.sh@10 -- # set +x 00:36:18.555 ************************************ 00:36:18.555 START TEST ftl_dirty_shutdown 00:36:18.555 ************************************ 00:36:18.555 19:03:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:36:18.555 * Looking for test storage... 00:36:18.555 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:36:18.555 19:03:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:36:18.555 19:03:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:36:18.555 19:03:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:36:18.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:18.815 --rc genhtml_branch_coverage=1 00:36:18.815 --rc genhtml_function_coverage=1 00:36:18.815 --rc genhtml_legend=1 00:36:18.815 --rc geninfo_all_blocks=1 00:36:18.815 --rc geninfo_unexecuted_blocks=1 00:36:18.815 00:36:18.815 ' 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:36:18.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:18.815 --rc genhtml_branch_coverage=1 00:36:18.815 --rc genhtml_function_coverage=1 00:36:18.815 --rc genhtml_legend=1 00:36:18.815 --rc geninfo_all_blocks=1 00:36:18.815 --rc geninfo_unexecuted_blocks=1 00:36:18.815 00:36:18.815 ' 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:36:18.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:18.815 --rc genhtml_branch_coverage=1 00:36:18.815 --rc genhtml_function_coverage=1 00:36:18.815 --rc genhtml_legend=1 00:36:18.815 --rc geninfo_all_blocks=1 00:36:18.815 --rc geninfo_unexecuted_blocks=1 00:36:18.815 00:36:18.815 ' 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:36:18.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:18.815 --rc genhtml_branch_coverage=1 00:36:18.815 --rc genhtml_function_coverage=1 00:36:18.815 --rc genhtml_legend=1 00:36:18.815 --rc geninfo_all_blocks=1 00:36:18.815 --rc geninfo_unexecuted_blocks=1 00:36:18.815 00:36:18.815 ' 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:36:18.815 19:03:47 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:36:18.815 19:03:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:36:18.816 19:03:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:36:18.816 19:03:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:36:18.816 19:03:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:36:18.816 19:03:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=79228 00:36:18.816 19:03:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:36:18.816 19:03:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 79228 00:36:18.816 19:03:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # '[' -z 79228 ']' 00:36:18.816 19:03:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:18.816 19:03:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:18.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:18.816 19:03:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:18.816 19:03:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:18.816 19:03:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:36:18.816 [2024-10-08 19:03:47.538375] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
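At this point dirty_shutdown.sh has parsed its options (base device 0000:00:11.0, NV cache 0000:00:10.0, a 240 s RPC timeout) and launched spdk_tgt on core 0; waitforlisten blocks until the RPC socket answers, and the rest of the setup traced below is a fixed chain of rpc.py calls. A condensed sketch of that chain, assuming the same repo layout (the lvs/lvol capture variables are illustrative, and the UUIDs differ from run to run):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Base device: 0000:00:11.0 appears as bdev nvme0n1 (1310720 blocks of 4096 B = 5120 MiB).
    $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    # Lvstore on the base bdev, then a thin-provisioned 103424 MiB lvol inside it.
    lvs=$($RPC bdev_lvol_create_lvstore nvme0n1 lvs)
    lvol=$($RPC bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")
    # NV cache device: 0000:00:10.0 appears as nvc0n1; split off a single 5171 MiB partition.
    $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    $RPC bdev_split_create nvc0n1 -s 5171 1
    # FTL bdev over the lvol, L2P capped at 10 MiB of DRAM, write-buffered by nvc0n1p0.
    $RPC -t 240 bdev_ftl_create -b ftl0 -d "$lvol" --l2p_dram_limit 10 -c nvc0n1p0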
00:36:18.816 [2024-10-08 19:03:47.538790] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79228 ] 00:36:19.074 [2024-10-08 19:03:47.717423] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:19.332 [2024-10-08 19:03:48.027644] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:36:20.269 19:03:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:20.269 19:03:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # return 0 00:36:20.269 19:03:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:36:20.269 19:03:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:36:20.269 19:03:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:36:20.269 19:03:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:36:20.269 19:03:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:36:20.269 19:03:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:36:20.838 19:03:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:36:20.838 19:03:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:36:20.838 19:03:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:36:20.838 19:03:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:36:20.838 19:03:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:36:20.838 19:03:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:36:20.838 19:03:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:36:20.838 19:03:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:36:21.097 19:03:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:36:21.097 { 00:36:21.097 "name": "nvme0n1", 00:36:21.097 "aliases": [ 00:36:21.097 "8a313b82-4de8-45c7-b191-8a6d1b95ec4c" 00:36:21.097 ], 00:36:21.097 "product_name": "NVMe disk", 00:36:21.097 "block_size": 4096, 00:36:21.097 "num_blocks": 1310720, 00:36:21.097 "uuid": "8a313b82-4de8-45c7-b191-8a6d1b95ec4c", 00:36:21.097 "numa_id": -1, 00:36:21.097 "assigned_rate_limits": { 00:36:21.097 "rw_ios_per_sec": 0, 00:36:21.097 "rw_mbytes_per_sec": 0, 00:36:21.097 "r_mbytes_per_sec": 0, 00:36:21.097 "w_mbytes_per_sec": 0 00:36:21.097 }, 00:36:21.097 "claimed": true, 00:36:21.097 "claim_type": "read_many_write_one", 00:36:21.097 "zoned": false, 00:36:21.097 "supported_io_types": { 00:36:21.097 "read": true, 00:36:21.097 "write": true, 00:36:21.097 "unmap": true, 00:36:21.097 "flush": true, 00:36:21.097 "reset": true, 00:36:21.097 "nvme_admin": true, 00:36:21.097 "nvme_io": true, 00:36:21.097 "nvme_io_md": false, 00:36:21.097 "write_zeroes": true, 00:36:21.097 "zcopy": false, 00:36:21.097 "get_zone_info": false, 00:36:21.097 "zone_management": false, 00:36:21.097 "zone_append": false, 00:36:21.097 "compare": true, 00:36:21.097 "compare_and_write": false, 00:36:21.097 "abort": true, 00:36:21.097 "seek_hole": false, 00:36:21.097 "seek_data": false, 00:36:21.097 
"copy": true, 00:36:21.097 "nvme_iov_md": false 00:36:21.097 }, 00:36:21.097 "driver_specific": { 00:36:21.097 "nvme": [ 00:36:21.097 { 00:36:21.097 "pci_address": "0000:00:11.0", 00:36:21.097 "trid": { 00:36:21.097 "trtype": "PCIe", 00:36:21.097 "traddr": "0000:00:11.0" 00:36:21.097 }, 00:36:21.097 "ctrlr_data": { 00:36:21.097 "cntlid": 0, 00:36:21.097 "vendor_id": "0x1b36", 00:36:21.097 "model_number": "QEMU NVMe Ctrl", 00:36:21.097 "serial_number": "12341", 00:36:21.097 "firmware_revision": "8.0.0", 00:36:21.097 "subnqn": "nqn.2019-08.org.qemu:12341", 00:36:21.097 "oacs": { 00:36:21.097 "security": 0, 00:36:21.097 "format": 1, 00:36:21.097 "firmware": 0, 00:36:21.097 "ns_manage": 1 00:36:21.097 }, 00:36:21.097 "multi_ctrlr": false, 00:36:21.097 "ana_reporting": false 00:36:21.097 }, 00:36:21.097 "vs": { 00:36:21.097 "nvme_version": "1.4" 00:36:21.097 }, 00:36:21.097 "ns_data": { 00:36:21.097 "id": 1, 00:36:21.097 "can_share": false 00:36:21.097 } 00:36:21.097 } 00:36:21.097 ], 00:36:21.097 "mp_policy": "active_passive" 00:36:21.097 } 00:36:21.097 } 00:36:21.097 ]' 00:36:21.097 19:03:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:36:21.097 19:03:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:36:21.097 19:03:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:36:21.097 19:03:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:36:21.097 19:03:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:36:21.097 19:03:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:36:21.097 19:03:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:36:21.097 19:03:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:36:21.097 19:03:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:36:21.097 19:03:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:36:21.097 19:03:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:36:21.356 19:03:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=ea1f3826-5211-4107-9e54-ceb0ca22b05c 00:36:21.356 19:03:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:36:21.356 19:03:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ea1f3826-5211-4107-9e54-ceb0ca22b05c 00:36:21.614 19:03:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:36:21.873 19:03:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=5901a52a-9a5f-492f-83dc-c6737f33f35b 00:36:21.873 19:03:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 5901a52a-9a5f-492f-83dc-c6737f33f35b 00:36:22.132 19:03:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=8d474544-97f3-4f1e-b4b6-8d759ebe384a 00:36:22.132 19:03:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:36:22.132 19:03:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 8d474544-97f3-4f1e-b4b6-8d759ebe384a 00:36:22.132 19:03:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:36:22.132 19:03:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:36:22.132 19:03:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=8d474544-97f3-4f1e-b4b6-8d759ebe384a 00:36:22.132 19:03:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:36:22.132 19:03:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 8d474544-97f3-4f1e-b4b6-8d759ebe384a 00:36:22.132 19:03:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=8d474544-97f3-4f1e-b4b6-8d759ebe384a 00:36:22.132 19:03:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:36:22.132 19:03:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:36:22.132 19:03:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:36:22.132 19:03:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8d474544-97f3-4f1e-b4b6-8d759ebe384a 00:36:22.391 19:03:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:36:22.391 { 00:36:22.391 "name": "8d474544-97f3-4f1e-b4b6-8d759ebe384a", 00:36:22.392 "aliases": [ 00:36:22.392 "lvs/nvme0n1p0" 00:36:22.392 ], 00:36:22.392 "product_name": "Logical Volume", 00:36:22.392 "block_size": 4096, 00:36:22.392 "num_blocks": 26476544, 00:36:22.392 "uuid": "8d474544-97f3-4f1e-b4b6-8d759ebe384a", 00:36:22.392 "assigned_rate_limits": { 00:36:22.392 "rw_ios_per_sec": 0, 00:36:22.392 "rw_mbytes_per_sec": 0, 00:36:22.392 "r_mbytes_per_sec": 0, 00:36:22.392 "w_mbytes_per_sec": 0 00:36:22.392 }, 00:36:22.392 "claimed": false, 00:36:22.392 "zoned": false, 00:36:22.392 "supported_io_types": { 00:36:22.392 "read": true, 00:36:22.392 "write": true, 00:36:22.392 "unmap": true, 00:36:22.392 "flush": false, 00:36:22.392 "reset": true, 00:36:22.392 "nvme_admin": false, 00:36:22.392 "nvme_io": false, 00:36:22.392 "nvme_io_md": false, 00:36:22.392 "write_zeroes": true, 00:36:22.392 "zcopy": false, 00:36:22.392 "get_zone_info": false, 00:36:22.392 "zone_management": false, 00:36:22.392 "zone_append": false, 00:36:22.392 "compare": false, 00:36:22.392 "compare_and_write": false, 00:36:22.392 "abort": false, 00:36:22.392 "seek_hole": true, 00:36:22.392 "seek_data": true, 00:36:22.392 "copy": false, 00:36:22.392 "nvme_iov_md": false 00:36:22.392 }, 00:36:22.392 "driver_specific": { 00:36:22.392 "lvol": { 00:36:22.392 "lvol_store_uuid": "5901a52a-9a5f-492f-83dc-c6737f33f35b", 00:36:22.392 "base_bdev": "nvme0n1", 00:36:22.392 "thin_provision": true, 00:36:22.392 "num_allocated_clusters": 0, 00:36:22.392 "snapshot": false, 00:36:22.392 "clone": false, 00:36:22.392 "esnap_clone": false 00:36:22.392 } 00:36:22.392 } 00:36:22.392 } 00:36:22.392 ]' 00:36:22.392 19:03:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:36:22.392 19:03:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:36:22.392 19:03:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:36:22.651 19:03:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:36:22.651 19:03:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:36:22.651 19:03:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:36:22.651 19:03:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:36:22.651 19:03:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:36:22.651 19:03:51 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:36:22.909 19:03:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:36:22.909 19:03:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:36:22.909 19:03:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 8d474544-97f3-4f1e-b4b6-8d759ebe384a 00:36:22.909 19:03:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=8d474544-97f3-4f1e-b4b6-8d759ebe384a 00:36:22.909 19:03:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:36:22.909 19:03:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:36:22.909 19:03:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:36:22.909 19:03:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8d474544-97f3-4f1e-b4b6-8d759ebe384a 00:36:23.168 19:03:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:36:23.168 { 00:36:23.168 "name": "8d474544-97f3-4f1e-b4b6-8d759ebe384a", 00:36:23.168 "aliases": [ 00:36:23.168 "lvs/nvme0n1p0" 00:36:23.168 ], 00:36:23.168 "product_name": "Logical Volume", 00:36:23.168 "block_size": 4096, 00:36:23.168 "num_blocks": 26476544, 00:36:23.168 "uuid": "8d474544-97f3-4f1e-b4b6-8d759ebe384a", 00:36:23.168 "assigned_rate_limits": { 00:36:23.168 "rw_ios_per_sec": 0, 00:36:23.168 "rw_mbytes_per_sec": 0, 00:36:23.168 "r_mbytes_per_sec": 0, 00:36:23.168 "w_mbytes_per_sec": 0 00:36:23.168 }, 00:36:23.168 "claimed": false, 00:36:23.168 "zoned": false, 00:36:23.168 "supported_io_types": { 00:36:23.168 "read": true, 00:36:23.168 "write": true, 00:36:23.168 "unmap": true, 00:36:23.168 "flush": false, 00:36:23.168 "reset": true, 00:36:23.168 "nvme_admin": false, 00:36:23.168 "nvme_io": false, 00:36:23.168 "nvme_io_md": false, 00:36:23.168 "write_zeroes": true, 00:36:23.168 "zcopy": false, 00:36:23.169 "get_zone_info": false, 00:36:23.169 "zone_management": false, 00:36:23.169 "zone_append": false, 00:36:23.169 "compare": false, 00:36:23.169 "compare_and_write": false, 00:36:23.169 "abort": false, 00:36:23.169 "seek_hole": true, 00:36:23.169 "seek_data": true, 00:36:23.169 "copy": false, 00:36:23.169 "nvme_iov_md": false 00:36:23.169 }, 00:36:23.169 "driver_specific": { 00:36:23.169 "lvol": { 00:36:23.169 "lvol_store_uuid": "5901a52a-9a5f-492f-83dc-c6737f33f35b", 00:36:23.169 "base_bdev": "nvme0n1", 00:36:23.169 "thin_provision": true, 00:36:23.169 "num_allocated_clusters": 0, 00:36:23.169 "snapshot": false, 00:36:23.169 "clone": false, 00:36:23.169 "esnap_clone": false 00:36:23.169 } 00:36:23.169 } 00:36:23.169 } 00:36:23.169 ]' 00:36:23.169 19:03:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:36:23.169 19:03:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:36:23.169 19:03:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:36:23.169 19:03:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:36:23.169 19:03:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:36:23.169 19:03:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:36:23.169 19:03:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:36:23.169 19:03:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:36:23.428 19:03:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:36:23.428 19:03:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 8d474544-97f3-4f1e-b4b6-8d759ebe384a 00:36:23.428 19:03:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=8d474544-97f3-4f1e-b4b6-8d759ebe384a 00:36:23.429 19:03:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:36:23.429 19:03:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:36:23.429 19:03:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:36:23.429 19:03:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8d474544-97f3-4f1e-b4b6-8d759ebe384a 00:36:23.687 19:03:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:36:23.687 { 00:36:23.687 "name": "8d474544-97f3-4f1e-b4b6-8d759ebe384a", 00:36:23.687 "aliases": [ 00:36:23.687 "lvs/nvme0n1p0" 00:36:23.687 ], 00:36:23.687 "product_name": "Logical Volume", 00:36:23.687 "block_size": 4096, 00:36:23.687 "num_blocks": 26476544, 00:36:23.687 "uuid": "8d474544-97f3-4f1e-b4b6-8d759ebe384a", 00:36:23.687 "assigned_rate_limits": { 00:36:23.687 "rw_ios_per_sec": 0, 00:36:23.687 "rw_mbytes_per_sec": 0, 00:36:23.687 "r_mbytes_per_sec": 0, 00:36:23.687 "w_mbytes_per_sec": 0 00:36:23.687 }, 00:36:23.687 "claimed": false, 00:36:23.687 "zoned": false, 00:36:23.687 "supported_io_types": { 00:36:23.687 "read": true, 00:36:23.687 "write": true, 00:36:23.687 "unmap": true, 00:36:23.687 "flush": false, 00:36:23.687 "reset": true, 00:36:23.687 "nvme_admin": false, 00:36:23.687 "nvme_io": false, 00:36:23.687 "nvme_io_md": false, 00:36:23.687 "write_zeroes": true, 00:36:23.687 "zcopy": false, 00:36:23.687 "get_zone_info": false, 00:36:23.688 "zone_management": false, 00:36:23.688 "zone_append": false, 00:36:23.688 "compare": false, 00:36:23.688 "compare_and_write": false, 00:36:23.688 "abort": false, 00:36:23.688 "seek_hole": true, 00:36:23.688 "seek_data": true, 00:36:23.688 "copy": false, 00:36:23.688 "nvme_iov_md": false 00:36:23.688 }, 00:36:23.688 "driver_specific": { 00:36:23.688 "lvol": { 00:36:23.688 "lvol_store_uuid": "5901a52a-9a5f-492f-83dc-c6737f33f35b", 00:36:23.688 "base_bdev": "nvme0n1", 00:36:23.688 "thin_provision": true, 00:36:23.688 "num_allocated_clusters": 0, 00:36:23.688 "snapshot": false, 00:36:23.688 "clone": false, 00:36:23.688 "esnap_clone": false 00:36:23.688 } 00:36:23.688 } 00:36:23.688 } 00:36:23.688 ]' 00:36:23.688 19:03:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:36:23.688 19:03:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:36:23.688 19:03:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:36:23.688 19:03:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:36:23.688 19:03:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:36:23.688 19:03:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:36:23.688 19:03:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:36:23.688 19:03:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 8d474544-97f3-4f1e-b4b6-8d759ebe384a 
--l2p_dram_limit 10' 00:36:23.688 19:03:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:36:23.688 19:03:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:36:23.688 19:03:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:36:23.688 19:03:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 8d474544-97f3-4f1e-b4b6-8d759ebe384a --l2p_dram_limit 10 -c nvc0n1p0 00:36:23.947 [2024-10-08 19:03:52.593887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:23.947 [2024-10-08 19:03:52.594212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:36:23.947 [2024-10-08 19:03:52.594253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:36:23.947 [2024-10-08 19:03:52.594266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:23.947 [2024-10-08 19:03:52.594369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:23.947 [2024-10-08 19:03:52.594384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:36:23.947 [2024-10-08 19:03:52.594400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:36:23.947 [2024-10-08 19:03:52.594413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:23.947 [2024-10-08 19:03:52.594470] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:36:23.947 [2024-10-08 19:03:52.595746] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:36:23.947 [2024-10-08 19:03:52.595780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:23.947 [2024-10-08 19:03:52.595793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:36:23.947 [2024-10-08 19:03:52.595810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.341 ms 00:36:23.947 [2024-10-08 19:03:52.595824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:23.947 [2024-10-08 19:03:52.595918] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 5a380665-21e3-4637-8cd0-a3b526ef9bbe 00:36:23.947 [2024-10-08 19:03:52.597528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:23.947 [2024-10-08 19:03:52.597628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:36:23.947 [2024-10-08 19:03:52.597646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:36:23.947 [2024-10-08 19:03:52.597661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:23.947 [2024-10-08 19:03:52.605519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:23.948 [2024-10-08 19:03:52.605573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:36:23.948 [2024-10-08 19:03:52.605591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.768 ms 00:36:23.948 [2024-10-08 19:03:52.605606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:23.948 [2024-10-08 19:03:52.605741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:23.948 [2024-10-08 19:03:52.605762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:36:23.948 [2024-10-08 19:03:52.605775] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:36:23.948 [2024-10-08 19:03:52.605807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:23.948 [2024-10-08 19:03:52.605959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:23.948 [2024-10-08 19:03:52.606002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:36:23.948 [2024-10-08 19:03:52.606016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:36:23.948 [2024-10-08 19:03:52.606032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:23.948 [2024-10-08 19:03:52.606063] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:36:23.948 [2024-10-08 19:03:52.612111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:23.948 [2024-10-08 19:03:52.612159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:36:23.948 [2024-10-08 19:03:52.612179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.051 ms 00:36:23.948 [2024-10-08 19:03:52.612192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:23.948 [2024-10-08 19:03:52.612251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:23.948 [2024-10-08 19:03:52.612265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:36:23.948 [2024-10-08 19:03:52.612282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:36:23.948 [2024-10-08 19:03:52.612297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:23.948 [2024-10-08 19:03:52.612351] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:36:23.948 [2024-10-08 19:03:52.612509] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:36:23.948 [2024-10-08 19:03:52.612569] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:36:23.948 [2024-10-08 19:03:52.612607] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:36:23.948 [2024-10-08 19:03:52.612633] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:36:23.948 [2024-10-08 19:03:52.612649] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:36:23.948 [2024-10-08 19:03:52.612666] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:36:23.948 [2024-10-08 19:03:52.612678] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:36:23.948 [2024-10-08 19:03:52.612692] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:36:23.948 [2024-10-08 19:03:52.612704] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:36:23.948 [2024-10-08 19:03:52.612719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:23.948 [2024-10-08 19:03:52.612750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:36:23.948 [2024-10-08 19:03:52.612776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.371 ms 00:36:23.948 [2024-10-08 19:03:52.612793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:23.948 [2024-10-08 19:03:52.612889] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:23.948 [2024-10-08 19:03:52.612908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:36:23.948 [2024-10-08 19:03:52.612923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:36:23.948 [2024-10-08 19:03:52.612935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:23.948 [2024-10-08 19:03:52.613069] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:36:23.948 [2024-10-08 19:03:52.613093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:36:23.948 [2024-10-08 19:03:52.613109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:36:23.948 [2024-10-08 19:03:52.613122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:23.948 [2024-10-08 19:03:52.613137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:36:23.948 [2024-10-08 19:03:52.613149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:36:23.948 [2024-10-08 19:03:52.613163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:36:23.948 [2024-10-08 19:03:52.613174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:36:23.948 [2024-10-08 19:03:52.613189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:36:23.948 [2024-10-08 19:03:52.613200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:36:23.948 [2024-10-08 19:03:52.613214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:36:23.948 [2024-10-08 19:03:52.613225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:36:23.948 [2024-10-08 19:03:52.613238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:36:23.948 [2024-10-08 19:03:52.613250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:36:23.948 [2024-10-08 19:03:52.613264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:36:23.948 [2024-10-08 19:03:52.613275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:23.948 [2024-10-08 19:03:52.613294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:36:23.948 [2024-10-08 19:03:52.613312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:36:23.948 [2024-10-08 19:03:52.613333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:23.948 [2024-10-08 19:03:52.613344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:36:23.948 [2024-10-08 19:03:52.613358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:36:23.948 [2024-10-08 19:03:52.613369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:23.948 [2024-10-08 19:03:52.613386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:36:23.948 [2024-10-08 19:03:52.613397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:36:23.948 [2024-10-08 19:03:52.613411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:23.948 [2024-10-08 19:03:52.613422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:36:23.948 [2024-10-08 19:03:52.613435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:36:23.948 [2024-10-08 19:03:52.613447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:23.948 [2024-10-08 19:03:52.613462] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:36:23.948 [2024-10-08 19:03:52.613473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:36:23.948 [2024-10-08 19:03:52.613487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:23.948 [2024-10-08 19:03:52.613498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:36:23.948 [2024-10-08 19:03:52.613515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:36:23.948 [2024-10-08 19:03:52.613527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:36:23.948 [2024-10-08 19:03:52.613541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:36:23.948 [2024-10-08 19:03:52.613553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:36:23.948 [2024-10-08 19:03:52.613566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:36:23.948 [2024-10-08 19:03:52.613578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:36:23.948 [2024-10-08 19:03:52.613592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:36:23.948 [2024-10-08 19:03:52.613603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:23.948 [2024-10-08 19:03:52.613617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:36:23.948 [2024-10-08 19:03:52.613628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:36:23.948 [2024-10-08 19:03:52.613641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:23.948 [2024-10-08 19:03:52.613652] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:36:23.948 [2024-10-08 19:03:52.613666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:36:23.948 [2024-10-08 19:03:52.613681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:36:23.948 [2024-10-08 19:03:52.613696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:23.948 [2024-10-08 19:03:52.613708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:36:23.948 [2024-10-08 19:03:52.613726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:36:23.948 [2024-10-08 19:03:52.613737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:36:23.948 [2024-10-08 19:03:52.613752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:36:23.948 [2024-10-08 19:03:52.613763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:36:23.948 [2024-10-08 19:03:52.613777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:36:23.948 [2024-10-08 19:03:52.613793] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:36:23.948 [2024-10-08 19:03:52.613812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:23.948 [2024-10-08 19:03:52.613826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:36:23.948 [2024-10-08 19:03:52.613842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:36:23.948 [2024-10-08 19:03:52.613854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:36:23.948 [2024-10-08 19:03:52.613869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:36:23.949 [2024-10-08 19:03:52.613882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:36:23.949 [2024-10-08 19:03:52.613899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:36:23.949 [2024-10-08 19:03:52.613911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:36:23.949 [2024-10-08 19:03:52.613926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:36:23.949 [2024-10-08 19:03:52.613938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:36:23.949 [2024-10-08 19:03:52.613966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:36:23.949 [2024-10-08 19:03:52.613980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:36:23.949 [2024-10-08 19:03:52.613995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:36:23.949 [2024-10-08 19:03:52.614008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:36:23.949 [2024-10-08 19:03:52.614024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:36:23.949 [2024-10-08 19:03:52.614037] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:36:23.949 [2024-10-08 19:03:52.614053] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:23.949 [2024-10-08 19:03:52.614066] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:36:23.949 [2024-10-08 19:03:52.614084] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:36:23.949 [2024-10-08 19:03:52.614097] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:36:23.949 [2024-10-08 19:03:52.614112] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:36:23.949 [2024-10-08 19:03:52.614125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:23.949 [2024-10-08 19:03:52.614141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:36:23.949 [2024-10-08 19:03:52.614154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.142 ms 00:36:23.949 [2024-10-08 19:03:52.614168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:23.949 [2024-10-08 19:03:52.614226] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:36:23.949 [2024-10-08 19:03:52.614248] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:36:26.480 [2024-10-08 19:03:54.992316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:26.480 [2024-10-08 19:03:54.992601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:36:26.480 [2024-10-08 19:03:54.992632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2378.073 ms 00:36:26.480 [2024-10-08 19:03:54.992648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:26.480 [2024-10-08 19:03:55.035096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:26.480 [2024-10-08 19:03:55.035163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:36:26.480 [2024-10-08 19:03:55.035199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.040 ms 00:36:26.480 [2024-10-08 19:03:55.035214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:26.480 [2024-10-08 19:03:55.035394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:26.480 [2024-10-08 19:03:55.035423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:36:26.480 [2024-10-08 19:03:55.035436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:36:26.480 [2024-10-08 19:03:55.035454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:26.480 [2024-10-08 19:03:55.092624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:26.480 [2024-10-08 19:03:55.092706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:36:26.480 [2024-10-08 19:03:55.092733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.089 ms 00:36:26.480 [2024-10-08 19:03:55.092755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:26.480 [2024-10-08 19:03:55.092824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:26.480 [2024-10-08 19:03:55.092844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:36:26.480 [2024-10-08 19:03:55.092860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:36:26.480 [2024-10-08 19:03:55.092897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:26.480 [2024-10-08 19:03:55.093520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:26.480 [2024-10-08 19:03:55.093559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:36:26.480 [2024-10-08 19:03:55.093577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.502 ms 00:36:26.480 [2024-10-08 19:03:55.093601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:26.480 [2024-10-08 19:03:55.093750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:26.480 [2024-10-08 19:03:55.093771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:36:26.480 [2024-10-08 19:03:55.093787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:36:26.480 [2024-10-08 19:03:55.093809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:26.480 [2024-10-08 19:03:55.116507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:26.481 [2024-10-08 19:03:55.116576] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:36:26.481 [2024-10-08 19:03:55.116593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.665 ms 00:36:26.481 [2024-10-08 19:03:55.116607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:26.481 [2024-10-08 19:03:55.131967] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:36:26.481 [2024-10-08 19:03:55.135494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:26.481 [2024-10-08 19:03:55.135781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:36:26.481 [2024-10-08 19:03:55.135836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.750 ms 00:36:26.481 [2024-10-08 19:03:55.135852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:26.481 [2024-10-08 19:03:55.204024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:26.481 [2024-10-08 19:03:55.204100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:36:26.481 [2024-10-08 19:03:55.204125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.097 ms 00:36:26.481 [2024-10-08 19:03:55.204137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:26.481 [2024-10-08 19:03:55.204367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:26.481 [2024-10-08 19:03:55.204381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:36:26.481 [2024-10-08 19:03:55.204399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:36:26.481 [2024-10-08 19:03:55.204410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:26.740 [2024-10-08 19:03:55.246407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:26.740 [2024-10-08 19:03:55.246760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:36:26.740 [2024-10-08 19:03:55.246795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.901 ms 00:36:26.740 [2024-10-08 19:03:55.246823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:26.740 [2024-10-08 19:03:55.287706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:26.740 [2024-10-08 19:03:55.287783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:36:26.740 [2024-10-08 19:03:55.287806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.792 ms 00:36:26.740 [2024-10-08 19:03:55.287817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:26.740 [2024-10-08 19:03:55.288691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:26.740 [2024-10-08 19:03:55.288730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:36:26.740 [2024-10-08 19:03:55.288747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.774 ms 00:36:26.740 [2024-10-08 19:03:55.288759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:26.740 [2024-10-08 19:03:55.395527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:26.740 [2024-10-08 19:03:55.395608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:36:26.740 [2024-10-08 19:03:55.395634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 106.656 ms 00:36:26.740 [2024-10-08 19:03:55.395649] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:26.740 [2024-10-08 19:03:55.437571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:26.740 [2024-10-08 19:03:55.437676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:36:26.740 [2024-10-08 19:03:55.437700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.750 ms 00:36:26.740 [2024-10-08 19:03:55.437712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:26.740 [2024-10-08 19:03:55.478786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:26.740 [2024-10-08 19:03:55.478870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:36:26.740 [2024-10-08 19:03:55.478892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.957 ms 00:36:26.740 [2024-10-08 19:03:55.478903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:26.998 [2024-10-08 19:03:55.520991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:26.998 [2024-10-08 19:03:55.521066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:36:26.998 [2024-10-08 19:03:55.521086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.695 ms 00:36:26.998 [2024-10-08 19:03:55.521097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:26.998 [2024-10-08 19:03:55.521205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:26.998 [2024-10-08 19:03:55.521218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:36:26.998 [2024-10-08 19:03:55.521238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:36:26.998 [2024-10-08 19:03:55.521253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:26.998 [2024-10-08 19:03:55.521389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:26.998 [2024-10-08 19:03:55.521402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:36:26.998 [2024-10-08 19:03:55.521416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:36:26.998 [2024-10-08 19:03:55.521427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:26.998 [2024-10-08 19:03:55.522689] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2928.247 ms, result 0 00:36:26.998 { 00:36:26.998 "name": "ftl0", 00:36:26.998 "uuid": "5a380665-21e3-4637-8cd0-a3b526ef9bbe" 00:36:26.998 } 00:36:26.998 19:03:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:36:26.998 19:03:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:36:27.257 19:03:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:36:27.257 19:03:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:36:27.257 19:03:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:36:27.517 /dev/nbd0 00:36:27.517 19:03:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:36:27.517 19:03:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:36:27.517 19:03:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # local i 00:36:27.517 19:03:56 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:36:27.517 19:03:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:36:27.517 19:03:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:36:27.517 19:03:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # break 00:36:27.517 19:03:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:36:27.517 19:03:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:36:27.517 19:03:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:36:27.517 1+0 records in 00:36:27.517 1+0 records out 00:36:27.517 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000604199 s, 6.8 MB/s 00:36:27.517 19:03:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:36:27.517 19:03:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # size=4096 00:36:27.517 19:03:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:36:27.517 19:03:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:36:27.517 19:03:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # return 0 00:36:27.517 19:03:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:36:27.776 [2024-10-08 19:03:56.290482] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:36:27.776 [2024-10-08 19:03:56.290673] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79376 ] 00:36:27.776 [2024-10-08 19:03:56.452437] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:28.035 [2024-10-08 19:03:56.717692] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:36:29.412  [2024-10-08T19:03:59.107Z] Copying: 183/1024 [MB] (183 MBps) [2024-10-08T19:04:00.484Z] Copying: 366/1024 [MB] (183 MBps) [2024-10-08T19:04:01.422Z] Copying: 548/1024 [MB] (181 MBps) [2024-10-08T19:04:02.358Z] Copying: 724/1024 [MB] (176 MBps) [2024-10-08T19:04:02.926Z] Copying: 897/1024 [MB] (172 MBps) [2024-10-08T19:04:04.304Z] Copying: 1024/1024 [MB] (average 177 MBps) 00:36:35.547 00:36:35.547 19:04:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:36:37.461 19:04:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:36:37.720 [2024-10-08 19:04:06.251202] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
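Taken together, the two spdk_dd passes in this stage each move 262144 blocks of 4096 bytes, i.e. exactly 1 GiB. A condensed sketch of the sequence as traced above and below; the SPDK_DD and TESTFILE variables are shorthand introduced here, everything else is verbatim from the trace:

  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  TESTFILE=/home/vagrant/spdk_repo/spdk/test/ftl/testfile
  # Pass 1: fill the scratch file with random data (the ~177 MBps copy above).
  $SPDK_DD -m 0x2 --if=/dev/urandom --of=$TESTFILE --bs=4096 --count=262144
  # Checksum recorded now, presumably to be re-checked once the dirty shutdown
  # has been replayed (that comparison falls outside this excerpt).
  md5sum $TESTFILE
  # Pass 2: replay the file through the nbd device onto the FTL bdev with
  # O_DIRECT; this is the slower ~18 MBps copy that follows.
  $SPDK_DD -m 0x2 --if=$TESTFILE --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct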
00:36:37.720 [2024-10-08 19:04:06.251594] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79482 ] 00:36:37.720 [2024-10-08 19:04:06.421243] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:37.978 [2024-10-08 19:04:06.675327] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:36:39.356  [2024-10-08T19:04:09.049Z] Copying: 18/1024 [MB] (18 MBps) [2024-10-08T19:04:10.425Z] Copying: 37/1024 [MB] (18 MBps) [2024-10-08T19:04:11.359Z] Copying: 56/1024 [MB] (18 MBps) [2024-10-08T19:04:12.292Z] Copying: 75/1024 [MB] (18 MBps) [2024-10-08T19:04:13.229Z] Copying: 93/1024 [MB] (18 MBps) [2024-10-08T19:04:14.166Z] Copying: 112/1024 [MB] (18 MBps) [2024-10-08T19:04:15.103Z] Copying: 130/1024 [MB] (18 MBps) [2024-10-08T19:04:16.479Z] Copying: 149/1024 [MB] (18 MBps) [2024-10-08T19:04:17.047Z] Copying: 167/1024 [MB] (18 MBps) [2024-10-08T19:04:18.427Z] Copying: 186/1024 [MB] (18 MBps) [2024-10-08T19:04:19.363Z] Copying: 204/1024 [MB] (18 MBps) [2024-10-08T19:04:20.299Z] Copying: 222/1024 [MB] (18 MBps) [2024-10-08T19:04:21.308Z] Copying: 240/1024 [MB] (18 MBps) [2024-10-08T19:04:22.244Z] Copying: 259/1024 [MB] (18 MBps) [2024-10-08T19:04:23.181Z] Copying: 278/1024 [MB] (18 MBps) [2024-10-08T19:04:24.117Z] Copying: 296/1024 [MB] (18 MBps) [2024-10-08T19:04:25.053Z] Copying: 315/1024 [MB] (18 MBps) [2024-10-08T19:04:26.429Z] Copying: 333/1024 [MB] (18 MBps) [2024-10-08T19:04:27.366Z] Copying: 351/1024 [MB] (17 MBps) [2024-10-08T19:04:28.303Z] Copying: 369/1024 [MB] (17 MBps) [2024-10-08T19:04:29.240Z] Copying: 387/1024 [MB] (18 MBps) [2024-10-08T19:04:30.177Z] Copying: 406/1024 [MB] (18 MBps) [2024-10-08T19:04:31.157Z] Copying: 424/1024 [MB] (18 MBps) [2024-10-08T19:04:32.095Z] Copying: 443/1024 [MB] (18 MBps) [2024-10-08T19:04:33.474Z] Copying: 460/1024 [MB] (17 MBps) [2024-10-08T19:04:34.411Z] Copying: 476/1024 [MB] (16 MBps) [2024-10-08T19:04:35.350Z] Copying: 493/1024 [MB] (16 MBps) [2024-10-08T19:04:36.289Z] Copying: 512/1024 [MB] (18 MBps) [2024-10-08T19:04:37.228Z] Copying: 530/1024 [MB] (18 MBps) [2024-10-08T19:04:38.167Z] Copying: 549/1024 [MB] (18 MBps) [2024-10-08T19:04:39.104Z] Copying: 567/1024 [MB] (18 MBps) [2024-10-08T19:04:40.480Z] Copying: 586/1024 [MB] (18 MBps) [2024-10-08T19:04:41.047Z] Copying: 605/1024 [MB] (18 MBps) [2024-10-08T19:04:42.421Z] Copying: 624/1024 [MB] (19 MBps) [2024-10-08T19:04:43.355Z] Copying: 643/1024 [MB] (19 MBps) [2024-10-08T19:04:44.289Z] Copying: 662/1024 [MB] (18 MBps) [2024-10-08T19:04:45.225Z] Copying: 680/1024 [MB] (18 MBps) [2024-10-08T19:04:46.162Z] Copying: 699/1024 [MB] (18 MBps) [2024-10-08T19:04:47.098Z] Copying: 718/1024 [MB] (18 MBps) [2024-10-08T19:04:48.472Z] Copying: 736/1024 [MB] (18 MBps) [2024-10-08T19:04:49.405Z] Copying: 755/1024 [MB] (18 MBps) [2024-10-08T19:04:50.340Z] Copying: 773/1024 [MB] (18 MBps) [2024-10-08T19:04:51.276Z] Copying: 792/1024 [MB] (18 MBps) [2024-10-08T19:04:52.212Z] Copying: 811/1024 [MB] (18 MBps) [2024-10-08T19:04:53.147Z] Copying: 830/1024 [MB] (18 MBps) [2024-10-08T19:04:54.083Z] Copying: 849/1024 [MB] (19 MBps) [2024-10-08T19:04:55.492Z] Copying: 867/1024 [MB] (18 MBps) [2024-10-08T19:04:56.059Z] Copying: 886/1024 [MB] (18 MBps) [2024-10-08T19:04:57.436Z] Copying: 904/1024 [MB] (17 MBps) [2024-10-08T19:04:58.373Z] Copying: 922/1024 [MB] (18 MBps) 
[2024-10-08T19:04:59.307Z] Copying: 941/1024 [MB] (18 MBps) [2024-10-08T19:05:00.244Z] Copying: 960/1024 [MB] (18 MBps) [2024-10-08T19:05:01.180Z] Copying: 978/1024 [MB] (18 MBps) [2024-10-08T19:05:02.118Z] Copying: 996/1024 [MB] (18 MBps) [2024-10-08T19:05:02.691Z] Copying: 1014/1024 [MB] (17 MBps) [2024-10-08T19:05:04.069Z] Copying: 1024/1024 [MB] (average 18 MBps) 00:37:35.312 00:37:35.312 19:05:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:37:35.312 19:05:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:37:35.571 19:05:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:37:35.831 [2024-10-08 19:05:04.519991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:35.831 [2024-10-08 19:05:04.520262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:37:35.831 [2024-10-08 19:05:04.520290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:37:35.831 [2024-10-08 19:05:04.520304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:35.831 [2024-10-08 19:05:04.520347] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:37:35.831 [2024-10-08 19:05:04.524835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:35.831 [2024-10-08 19:05:04.524870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:37:35.831 [2024-10-08 19:05:04.524887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.460 ms 00:37:35.831 [2024-10-08 19:05:04.524898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:35.831 [2024-10-08 19:05:04.526553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:35.831 [2024-10-08 19:05:04.526592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:37:35.831 [2024-10-08 19:05:04.526612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.617 ms 00:37:35.831 [2024-10-08 19:05:04.526623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:35.831 [2024-10-08 19:05:04.542842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:35.831 [2024-10-08 19:05:04.543047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:37:35.831 [2024-10-08 19:05:04.543077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.188 ms 00:37:35.831 [2024-10-08 19:05:04.543089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:35.831 [2024-10-08 19:05:04.548283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:35.831 [2024-10-08 19:05:04.548322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:37:35.831 [2024-10-08 19:05:04.548342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.133 ms 00:37:35.831 [2024-10-08 19:05:04.548352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:36.091 [2024-10-08 19:05:04.587205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:36.091 [2024-10-08 19:05:04.587272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:37:36.091 [2024-10-08 19:05:04.587294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.749 ms 00:37:36.091 [2024-10-08 19:05:04.587305] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:36.091 [2024-10-08 19:05:04.610291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:36.091 [2024-10-08 19:05:04.610567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:37:36.091 [2024-10-08 19:05:04.610598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.901 ms 00:37:36.091 [2024-10-08 19:05:04.610610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:36.091 [2024-10-08 19:05:04.610831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:36.091 [2024-10-08 19:05:04.610848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:37:36.091 [2024-10-08 19:05:04.610862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:37:36.091 [2024-10-08 19:05:04.610877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:36.091 [2024-10-08 19:05:04.650359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:36.091 [2024-10-08 19:05:04.650422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:37:36.091 [2024-10-08 19:05:04.650443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.449 ms 00:37:36.091 [2024-10-08 19:05:04.650454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:36.091 [2024-10-08 19:05:04.689247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:36.091 [2024-10-08 19:05:04.689526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:37:36.092 [2024-10-08 19:05:04.689557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.716 ms 00:37:36.092 [2024-10-08 19:05:04.689569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:36.092 [2024-10-08 19:05:04.728300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:36.092 [2024-10-08 19:05:04.728579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:37:36.092 [2024-10-08 19:05:04.728610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.633 ms 00:37:36.092 [2024-10-08 19:05:04.728622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:36.092 [2024-10-08 19:05:04.767151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:36.092 [2024-10-08 19:05:04.767435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:37:36.092 [2024-10-08 19:05:04.767469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.357 ms 00:37:36.092 [2024-10-08 19:05:04.767480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:36.092 [2024-10-08 19:05:04.767595] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:37:36.092 [2024-10-08 19:05:04.767615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 19:05:04.767632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 19:05:04.767644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 19:05:04.767657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 19:05:04.767669] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 19:05:04.767683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 19:05:04.767694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 19:05:04.767711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 19:05:04.767722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 19:05:04.767736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 19:05:04.767747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 19:05:04.767761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 19:05:04.767772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 19:05:04.767786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 19:05:04.767797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 19:05:04.767810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 19:05:04.767822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 19:05:04.767835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 19:05:04.767846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 19:05:04.767863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 19:05:04.767874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 19:05:04.767887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 19:05:04.767898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 19:05:04.767915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 19:05:04.767926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 19:05:04.767940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 19:05:04.767951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 19:05:04.767980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 19:05:04.767992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 19:05:04.768006] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 19:05:04.768019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 19:05:04.768033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 19:05:04.768045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 19:05:04.768059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 19:05:04.768070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 19:05:04.768083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 19:05:04.768094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 19:05:04.768110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 19:05:04.768120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 19:05:04.768137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 19:05:04.768149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 19:05:04.768162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 19:05:04.768173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 19:05:04.768186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 19:05:04.768197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 19:05:04.768212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 19:05:04.768224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 19:05:04.768237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 19:05:04.768248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 19:05:04.768261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 19:05:04.768273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 19:05:04.768286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 19:05:04.768309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 19:05:04.768323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:37:36.092 [2024-10-08 
19:05:04.768334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
[Bands 56-99: 0 / 261120 wr_cnt: 0 state: free; 44 identical ftl_dev_dump_bands entries]
00:37:36.093 [2024-10-08 19:05:04.768920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:37:36.093 [2024-10-08 19:05:04.768939] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:37:36.093 [2024-10-08 19:05:04.768967] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5a380665-21e3-4637-8cd0-a3b526ef9bbe
00:37:36.093 [2024-10-08 19:05:04.768979] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:37:36.093 [2024-10-08 19:05:04.768994] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:37:36.093 [2024-10-08 19:05:04.769004] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:37:36.093 [2024-10-08 19:05:04.769017] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:37:36.093 [2024-10-08 19:05:04.769027] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:37:36.093 [2024-10-08 19:05:04.769040] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:37:36.093 [2024-10-08 19:05:04.769063] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:37:36.093 [2024-10-08 19:05:04.769075] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:37:36.093 [2024-10-08 19:05:04.769085] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:37:36.093 [2024-10-08 19:05:04.769097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:37:36.093 [2024-10-08 19:05:04.769108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:37:36.093 [2024-10-08 19:05:04.769122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.504 ms
00:37:36.093 [2024-10-08 19:05:04.769132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:36.093 [2024-10-08 19:05:04.789842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:37:36.093 [2024-10-08 19:05:04.789897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:37:36.093 [2024-10-08 19:05:04.789915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.626 ms
00:37:36.093 [2024-10-08 19:05:04.789927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:36.093 [2024-10-08 19:05:04.790581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:37:36.093 [2024-10-08 19:05:04.790604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:37:36.093 [2024-10-08 19:05:04.790619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.591 ms
00:37:36.093 [2024-10-08 19:05:04.790633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:36.352 [2024-10-08 19:05:04.850286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:37:36.352 [2024-10-08 19:05:04.850355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:37:36.352 [2024-10-08 19:05:04.850374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:37:36.352 [2024-10-08 19:05:04.850385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:36.352 [2024-10-08 19:05:04.850466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:37:36.352 [2024-10-08 19:05:04.850478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:37:36.352 [2024-10-08 19:05:04.850492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:37:36.352 [2024-10-08 19:05:04.850506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:36.352 [2024-10-08 19:05:04.850617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:37:36.352 [2024-10-08 19:05:04.850632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:37:36.352 [2024-10-08 19:05:04.850645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:37:36.352 [2024-10-08 19:05:04.850655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:36.353 [2024-10-08 19:05:04.850693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:37:36.353 [2024-10-08 19:05:04.850705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:37:36.353 [2024-10-08 19:05:04.850718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:37:36.353 [2024-10-08 19:05:04.850728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:36.353 [2024-10-08 19:05:04.980500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:37:36.353 [2024-10-08 19:05:04.980785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:37:36.353 [2024-10-08 19:05:04.980816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:37:36.353 [2024-10-08 19:05:04.980827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:36.353 [2024-10-08 19:05:05.087416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:37:36.353 [2024-10-08 19:05:05.087719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:37:36.353 [2024-10-08 19:05:05.087765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:37:36.353 [2024-10-08 19:05:05.087781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:36.353 [2024-10-08 19:05:05.087918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:37:36.353 [2024-10-08 19:05:05.087932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:37:36.353 [2024-10-08 19:05:05.087947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:37:36.353 [2024-10-08 19:05:05.087959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:36.353 [2024-10-08 19:05:05.088051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:37:36.353 [2024-10-08 19:05:05.088066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:37:36.353 [2024-10-08 19:05:05.088081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:37:36.353 [2024-10-08 19:05:05.088093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:36.353 [2024-10-08 19:05:05.088236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:37:36.353 [2024-10-08 19:05:05.088251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:37:36.353 [2024-10-08 19:05:05.088266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:37:36.353 [2024-10-08 19:05:05.088277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:36.353 [2024-10-08 19:05:05.088343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:37:36.353 [2024-10-08 19:05:05.088357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:37:36.353 [2024-10-08 19:05:05.088371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:37:36.353 [2024-10-08 19:05:05.088382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:36.353 [2024-10-08 19:05:05.088430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:37:36.353 [2024-10-08 19:05:05.088442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:37:36.353 [2024-10-08 19:05:05.088457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:37:36.353 [2024-10-08 19:05:05.088468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:36.353 [2024-10-08 19:05:05.088537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:37:36.353 [2024-10-08 19:05:05.088549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:37:36.353 [2024-10-08 19:05:05.088566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:37:36.353 [2024-10-08 19:05:05.088577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:36.353 [2024-10-08 19:05:05.088725] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 568.715 ms, result 0
00:37:36.353 true
00:37:36.612 19:05:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 79228
00:37:36.612 19:05:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid79228
00:37:36.612 19:05:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144
00:37:36.871 [2024-10-08 19:05:05.237132] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization...
00:37:36.871 [2024-10-08 19:05:05.237585] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80068 ]
00:37:37.130 [2024-10-08 19:05:05.426890] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:37:37.130 [2024-10-08 19:05:05.643481] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:37:38.514  [2024-10-08T19:05:08.227Z] Copying: 190/1024 [MB] (190 MBps) [2024-10-08T19:05:09.164Z] Copying: 383/1024 [MB] (192 MBps) [2024-10-08T19:05:10.100Z] Copying: 576/1024 [MB] (193 MBps) [2024-10-08T19:05:11.037Z] Copying: 768/1024 [MB] (192 MBps) [2024-10-08T19:05:11.295Z] Copying: 957/1024 [MB] (188 MBps) [2024-10-08T19:05:12.673Z] Copying: 1024/1024 [MB] (average 191 MBps)
00:37:43.916
00:37:43.916 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 79228 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1
00:37:43.916 19:05:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:37:44.175 [2024-10-08 19:05:12.724590] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization...
00:37:44.175 [2024-10-08 19:05:12.724764] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80143 ]
00:37:44.434 [2024-10-08 19:05:12.898511] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:37:45.001 [2024-10-08 19:05:13.156078] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:37:45.001 [2024-10-08 19:05:13.623331] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:37:45.001 [2024-10-08 19:05:13.623436] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:37:45.001 [2024-10-08 19:05:13.691472] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore
00:37:45.001 [2024-10-08 19:05:13.691935] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0
00:37:45.001 [2024-10-08 19:05:13.692187] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1
00:37:45.260 [2024-10-08 19:05:13.951118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:37:45.260 [2024-10-08 19:05:13.951194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:37:45.260 [2024-10-08 19:05:13.951213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:37:45.260 [2024-10-08 19:05:13.951225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:45.260 [2024-10-08 19:05:13.951300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:37:45.260 [2024-10-08 19:05:13.951314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:37:45.260 [2024-10-08 19:05:13.951325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms
00:37:45.260 [2024-10-08 19:05:13.951341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:45.260 [2024-10-08 19:05:13.951366] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:37:45.260 [2024-10-08 19:05:13.952542] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:37:45.260 [2024-10-08 19:05:13.952575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:37:45.260 [2024-10-08 19:05:13.952593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:37:45.260 [2024-10-08 19:05:13.952606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.215 ms
00:37:45.260 [2024-10-08 19:05:13.952618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:45.260 [2024-10-08 19:05:13.955339] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:37:45.261 [2024-10-08 19:05:13.978807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:37:45.261 [2024-10-08 19:05:13.978894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:37:45.261 [2024-10-08 19:05:13.978916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.463 ms
00:37:45.261 [2024-10-08 19:05:13.978929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:45.261 [2024-10-08 19:05:13.979096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:37:45.261 [2024-10-08 19:05:13.979114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:37:45.261 [2024-10-08 19:05:13.979134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms
00:37:45.261 [2024-10-08 19:05:13.979146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:45.261 [2024-10-08 19:05:13.994203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:37:45.261 [2024-10-08 19:05:13.994259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:37:45.261 [2024-10-08 19:05:13.994277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.931 ms
00:37:45.261 [2024-10-08 19:05:13.994290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:45.261 [2024-10-08 19:05:13.994435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:37:45.261 [2024-10-08 19:05:13.994451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:37:45.261 [2024-10-08 19:05:13.994464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms
00:37:45.261 [2024-10-08 19:05:13.994474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:45.261 [2024-10-08 19:05:13.994575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:37:45.261 [2024-10-08 19:05:13.994591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:37:45.261 [2024-10-08 19:05:13.994603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms
00:37:45.261 [2024-10-08 19:05:13.994614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:45.261 [2024-10-08 19:05:13.994650] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:37:45.261 [2024-10-08 19:05:14.001084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:37:45.261 [2024-10-08 19:05:14.001134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:37:45.261 [2024-10-08 19:05:14.001150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.443 ms
00:37:45.261 [2024-10-08 19:05:14.001162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:45.261 [2024-10-08 19:05:14.001218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:37:45.261 [2024-10-08 19:05:14.001230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:37:45.261 [2024-10-08 19:05:14.001243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms
00:37:45.261 [2024-10-08 19:05:14.001254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:45.261 [2024-10-08 19:05:14.001317] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:37:45.261 [2024-10-08 19:05:14.001347] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:37:45.261 [2024-10-08 19:05:14.001389] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:37:45.261 [2024-10-08 19:05:14.001414] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:37:45.261 [2024-10-08 19:05:14.001553] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:37:45.261 [2024-10-08 19:05:14.001579] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:37:45.261 [2024-10-08 19:05:14.001595] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:37:45.261 [2024-10-08 19:05:14.001610] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:37:45.261 [2024-10-08 19:05:14.001624] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:37:45.261 [2024-10-08 19:05:14.001639] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:37:45.261 [2024-10-08 19:05:14.001651] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:37:45.261 [2024-10-08 19:05:14.001662] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:37:45.261 [2024-10-08 19:05:14.001673] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:37:45.261 [2024-10-08 19:05:14.001686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:37:45.261 [2024-10-08 19:05:14.001703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:37:45.261 [2024-10-08 19:05:14.001715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.373 ms
00:37:45.261 [2024-10-08 19:05:14.001726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:45.261 [2024-10-08 19:05:14.001813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:37:45.261 [2024-10-08 19:05:14.001826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:37:45.261 [2024-10-08 19:05:14.001837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms
00:37:45.261 [2024-10-08 19:05:14.001849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:45.261 [2024-10-08 19:05:14.001969] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:37:45.261 [2024-10-08 19:05:14.001992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:37:45.261 [2024-10-08 19:05:14.002009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:37:45.261 [2024-10-08 19:05:14.002020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:37:45.261 [2024-10-08 19:05:14.002033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:37:45.261 [2024-10-08 19:05:14.002043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:37:45.261 [2024-10-08 19:05:14.002053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB
00:37:45.261 [2024-10-08 19:05:14.002064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:37:45.261 [2024-10-08 19:05:14.002076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB
00:37:45.261 [2024-10-08 19:05:14.002097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:37:45.261 [2024-10-08 19:05:14.002108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:37:45.261 [2024-10-08 19:05:14.002121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB
00:37:45.261 [2024-10-08 19:05:14.002132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:37:45.261 [2024-10-08 19:05:14.002142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:37:45.261 [2024-10-08 19:05:14.002152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB
00:37:45.261 [2024-10-08 19:05:14.002162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:37:45.261 [2024-10-08 19:05:14.002172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:37:45.261 [2024-10-08 19:05:14.002182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB
00:37:45.261 [2024-10-08 19:05:14.002192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:37:45.261 [2024-10-08 19:05:14.002202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:37:45.261 [2024-10-08 19:05:14.002212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB
00:37:45.261 [2024-10-08 19:05:14.002221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:37:45.261 [2024-10-08 19:05:14.002231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:37:45.261 [2024-10-08 19:05:14.002240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB
00:37:45.261 [2024-10-08 19:05:14.002249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:37:45.261 [2024-10-08 19:05:14.002258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:37:45.261 [2024-10-08 19:05:14.002268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB
00:37:45.261 [2024-10-08 19:05:14.002277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:37:45.261 [2024-10-08 19:05:14.002286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:37:45.261 [2024-10-08 19:05:14.002296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB
00:37:45.261 [2024-10-08 19:05:14.002305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:37:45.261 [2024-10-08 19:05:14.002314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:37:45.261 [2024-10-08 19:05:14.002324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB
00:37:45.261 [2024-10-08 19:05:14.002333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:37:45.261 [2024-10-08 19:05:14.002342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:37:45.261 [2024-10-08 19:05:14.002352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB
00:37:45.261 [2024-10-08 19:05:14.002361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:37:45.261 [2024-10-08 19:05:14.002370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:37:45.261 [2024-10-08 19:05:14.002380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB
00:37:45.261 [2024-10-08 19:05:14.002389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:37:45.261 [2024-10-08 19:05:14.002399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:37:45.261 [2024-10-08 19:05:14.002408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB
00:37:45.261 [2024-10-08 19:05:14.002417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:37:45.261 [2024-10-08 19:05:14.002427] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:37:45.261 [2024-10-08 19:05:14.002439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:37:45.261 [2024-10-08 19:05:14.002449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:37:45.261 [2024-10-08 19:05:14.002460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:37:45.261 [2024-10-08 19:05:14.002471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:37:45.261 [2024-10-08 19:05:14.002481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:37:45.261 [2024-10-08 19:05:14.002491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:37:45.261 [2024-10-08 19:05:14.002501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:37:45.261 [2024-10-08 19:05:14.002511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:37:45.261 [2024-10-08 19:05:14.002521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:37:45.261 [2024-10-08 19:05:14.002533] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:37:45.261 [2024-10-08 19:05:14.002546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:37:45.261 [2024-10-08 19:05:14.002559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:37:45.261 [2024-10-08 19:05:14.002571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:37:45.261 [2024-10-08 19:05:14.002582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:37:45.261 [2024-10-08 19:05:14.002593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:37:45.261 [2024-10-08 19:05:14.002605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:37:45.261 [2024-10-08 19:05:14.002616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:37:45.261 [2024-10-08 19:05:14.002626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:37:45.261 [2024-10-08 19:05:14.002637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:37:45.261 [2024-10-08 19:05:14.002648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:37:45.261 [2024-10-08 19:05:14.002659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:37:45.261 [2024-10-08 19:05:14.002669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:37:45.261 [2024-10-08 19:05:14.002680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:37:45.261 [2024-10-08 19:05:14.002690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:37:45.261 [2024-10-08 19:05:14.002701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:37:45.261 [2024-10-08 19:05:14.002712] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:37:45.262 [2024-10-08 19:05:14.002724] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:37:45.262 [2024-10-08 19:05:14.002741] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:37:45.262 [2024-10-08 19:05:14.002752] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:37:45.262 [2024-10-08 19:05:14.002763] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:37:45.262 [2024-10-08 19:05:14.002774] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:37:45.262 [2024-10-08 19:05:14.002786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:37:45.262 [2024-10-08 19:05:14.002797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:37:45.262 [2024-10-08 19:05:14.002809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.887 ms
00:37:45.262 [2024-10-08 19:05:14.002820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:45.521 [2024-10-08 19:05:14.064625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:37:45.521 [2024-10-08 19:05:14.064704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:37:45.521 [2024-10-08 19:05:14.064726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.741 ms
00:37:45.521 [2024-10-08 19:05:14.064755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:45.521 [2024-10-08 19:05:14.064897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:37:45.521 [2024-10-08 19:05:14.064912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:37:45.521 [2024-10-08 19:05:14.064926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms
00:37:45.521 [2024-10-08 19:05:14.064949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:45.521 [2024-10-08 19:05:14.123705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:37:45.521 [2024-10-08 19:05:14.123786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:37:45.521 [2024-10-08 19:05:14.123806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.593 ms
00:37:45.521 [2024-10-08 19:05:14.123820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:45.521 [2024-10-08 19:05:14.123917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:37:45.521 [2024-10-08 19:05:14.123931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:37:45.521 [2024-10-08 19:05:14.123945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:37:45.521 [2024-10-08 19:05:14.123967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:45.521 [2024-10-08 19:05:14.124897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:37:45.521 [2024-10-08 19:05:14.124925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:37:45.521 [2024-10-08 19:05:14.124938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.824 ms
00:37:45.521 [2024-10-08 19:05:14.124950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:45.521 [2024-10-08 19:05:14.125115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:37:45.521 [2024-10-08 19:05:14.125135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:37:45.521 [2024-10-08 19:05:14.125147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms
00:37:45.521 [2024-10-08 19:05:14.125159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:45.521 [2024-10-08 19:05:14.149398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:37:45.521 [2024-10-08 19:05:14.149480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:37:45.521 [2024-10-08 19:05:14.149500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.209 ms
00:37:45.521 [2024-10-08 19:05:14.149513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:45.521 [2024-10-08 19:05:14.174315] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:37:45.521 [2024-10-08 19:05:14.174411] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:37:45.521 [2024-10-08 19:05:14.174434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:37:45.521 [2024-10-08 19:05:14.174447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:37:45.521 [2024-10-08 19:05:14.174464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.683 ms
00:37:45.521 [2024-10-08 19:05:14.174476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:45.521 [2024-10-08 19:05:14.211565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:37:45.521 [2024-10-08 19:05:14.211666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:37:45.521 [2024-10-08 19:05:14.211703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.974 ms
00:37:45.521 [2024-10-08 19:05:14.211717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:45.521 [2024-10-08 19:05:14.235675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:37:45.521 [2024-10-08 19:05:14.235776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:37:45.521 [2024-10-08 19:05:14.235797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.822 ms
00:37:45.521 [2024-10-08 19:05:14.235810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:45.521 [2024-10-08 19:05:14.259053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:37:45.521 [2024-10-08 19:05:14.259146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:37:45.521 [2024-10-08 19:05:14.259166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.141 ms
00:37:45.521 [2024-10-08 19:05:14.259179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:45.521 [2024-10-08 19:05:14.260211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:37:45.521 [2024-10-08 19:05:14.260247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:37:45.521 [2024-10-08 19:05:14.260263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.776 ms
00:37:45.521 [2024-10-08 19:05:14.260276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:45.780 [2024-10-08 19:05:14.374560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:37:45.780 [2024-10-08 19:05:14.374694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:37:45.780 [2024-10-08 19:05:14.374718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 114.241 ms
00:37:45.780 [2024-10-08 19:05:14.374733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:45.780 [2024-10-08 19:05:14.393530] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:37:45.780 [2024-10-08 19:05:14.399463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:37:45.780 [2024-10-08 19:05:14.399531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:37:45.780 [2024-10-08 19:05:14.399552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.617 ms
00:37:45.780 [2024-10-08 19:05:14.399565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:45.780 [2024-10-08 19:05:14.399737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:37:45.780 [2024-10-08 19:05:14.399755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:37:45.780 [2024-10-08 19:05:14.399769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms
00:37:45.780 [2024-10-08 19:05:14.399782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:45.780 [2024-10-08 19:05:14.399917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:37:45.780 [2024-10-08 19:05:14.399948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:37:45.780 [2024-10-08 19:05:14.399977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms
00:37:45.780 [2024-10-08 19:05:14.399991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:45.780 [2024-10-08 19:05:14.400025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:37:45.780 [2024-10-08 19:05:14.400039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:37:45.780 [2024-10-08 19:05:14.400063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms
00:37:45.780 [2024-10-08 19:05:14.400076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:45.780 [2024-10-08 19:05:14.400132] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:37:45.780 [2024-10-08 19:05:14.400172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:37:45.781 [2024-10-08 19:05:14.400191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:37:45.781 [2024-10-08 19:05:14.400204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms
00:37:45.781 [2024-10-08 19:05:14.400217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:45.781 [2024-10-08 19:05:14.449777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:37:45.781 [2024-10-08 19:05:14.449875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:37:45.781 [2024-10-08 19:05:14.449896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.522 ms
00:37:45.781 [2024-10-08 19:05:14.449908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:45.781 [2024-10-08 19:05:14.450080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:37:45.781 [2024-10-08 19:05:14.450096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:37:45.781 [2024-10-08 19:05:14.450109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms
00:37:45.781 [2024-10-08 19:05:14.450120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:37:45.781 [2024-10-08 19:05:14.452047] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 500.216 ms, result 0
00:37:46.716  [2024-10-08T19:05:16.849Z] Copying: 30/1024 [MB] (30 MBps) [2024-10-08T19:05:17.784Z] Copying: 61/1024 [MB] (30 MBps) [2024-10-08T19:05:18.718Z] Copying: 92/1024 [MB] (30 MBps) [2024-10-08T19:05:19.652Z] Copying: 122/1024 [MB] (30 MBps) [2024-10-08T19:05:20.622Z] Copying: 151/1024 [MB] (28 MBps) [2024-10-08T19:05:21.558Z] Copying: 180/1024 [MB] (29 MBps) [2024-10-08T19:05:22.496Z] Copying: 209/1024 [MB] (29 MBps) [2024-10-08T19:05:23.871Z] Copying: 239/1024 [MB] (30 MBps) [2024-10-08T19:05:24.801Z] Copying: 270/1024 [MB] (30 MBps) [2024-10-08T19:05:25.734Z] Copying: 298/1024 [MB] (28 MBps) [2024-10-08T19:05:26.671Z] Copying: 326/1024 [MB] (27 MBps) [2024-10-08T19:05:27.608Z] Copying: 354/1024 [MB] (28 MBps) [2024-10-08T19:05:28.541Z] Copying: 384/1024 [MB] (29 MBps) [2024-10-08T19:05:29.475Z] Copying: 416/1024 [MB] (31 MBps) [2024-10-08T19:05:30.506Z] Copying: 449/1024 [MB] (33 MBps) [2024-10-08T19:05:31.882Z] Copying: 482/1024 [MB] (32 MBps) [2024-10-08T19:05:32.817Z] Copying: 518/1024 [MB] (36 MBps) [2024-10-08T19:05:33.750Z] Copying: 551/1024 [MB] (32 MBps) [2024-10-08T19:05:34.685Z] Copying: 584/1024 [MB] (32 MBps) [2024-10-08T19:05:35.620Z] Copying: 616/1024 [MB] (32 MBps) [2024-10-08T19:05:36.554Z] Copying: 648/1024 [MB] (31 MBps) [2024-10-08T19:05:37.512Z] Copying: 674/1024 [MB] (26 MBps) [2024-10-08T19:05:38.887Z] Copying: 707/1024 [MB] (32 MBps) [2024-10-08T19:05:39.821Z] Copying: 738/1024 [MB] (31 MBps) [2024-10-08T19:05:40.754Z] Copying: 770/1024 [MB] (31 MBps) [2024-10-08T19:05:41.688Z] Copying: 802/1024 [MB] (31 MBps) [2024-10-08T19:05:42.622Z] Copying: 834/1024 [MB] (32 MBps) [2024-10-08T19:05:43.556Z] Copying: 867/1024 [MB] (33 MBps) [2024-10-08T19:05:44.490Z] Copying: 900/1024 [MB] (32 MBps) [2024-10-08T19:05:45.868Z] Copying: 932/1024 [MB] (32 MBps) [2024-10-08T19:05:46.805Z] Copying: 964/1024 [MB] (31 MBps) [2024-10-08T19:05:47.742Z] Copying: 997/1024 [MB] (32 MBps) [2024-10-08T19:05:48.679Z] Copying: 1023/1024 [MB] (26 MBps) [2024-10-08T19:05:48.679Z] Copying: 1024/1024 [MB] (average 30 MBps)
[2024-10-08 19:05:48.317974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:38:19.922 [2024-10-08 19:05:48.318055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:38:19.922 [2024-10-08 19:05:48.318073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:38:19.922 [2024-10-08 19:05:48.318085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:19.922 [2024-10-08 19:05:48.320822] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:38:19.922 [2024-10-08 19:05:48.326337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:38:19.922 [2024-10-08 19:05:48.326395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:38:19.922 [2024-10-08 19:05:48.326413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.433 ms
00:38:19.922 [2024-10-08 19:05:48.326424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:19.922 [2024-10-08 19:05:48.338650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:38:19.922 [2024-10-08 19:05:48.338729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:38:19.922 [2024-10-08 19:05:48.338749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.129 ms
00:38:19.922 [2024-10-08 19:05:48.338761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:19.922 [2024-10-08 19:05:48.360279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:38:19.922 [2024-10-08 19:05:48.360387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:38:19.922 [2024-10-08 19:05:48.360406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.489 ms
00:38:19.922 [2024-10-08 19:05:48.360419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:19.922 [2024-10-08 19:05:48.365975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:38:19.922 [2024-10-08 19:05:48.366034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:38:19.922 [2024-10-08 19:05:48.366049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.506 ms
00:38:19.922 [2024-10-08 19:05:48.366060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:19.922 [2024-10-08 19:05:48.407896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:38:19.922 [2024-10-08 19:05:48.407984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:38:19.922 [2024-10-08 19:05:48.408004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.770 ms
00:38:19.922 [2024-10-08 19:05:48.408016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:19.922 [2024-10-08 19:05:48.431817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:38:19.922 [2024-10-08 19:05:48.431895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:38:19.922 [2024-10-08 19:05:48.431921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.719 ms
00:38:19.922 [2024-10-08 19:05:48.431933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:19.922 [2024-10-08 19:05:48.525646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:38:19.922 [2024-10-08 19:05:48.525733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:38:19.922 [2024-10-08 19:05:48.525752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.598 ms
00:38:19.922 [2024-10-08 19:05:48.525767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:19.922 [2024-10-08 19:05:48.566342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:38:19.922 [2024-10-08 19:05:48.566411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:38:19.922 [2024-10-08 19:05:48.566428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.548 ms
00:38:19.922 [2024-10-08 19:05:48.566438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:19.922 [2024-10-08 19:05:48.606472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:38:19.922 [2024-10-08 19:05:48.606555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:38:19.922 [2024-10-08 19:05:48.606572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.951 ms
00:38:19.922 [2024-10-08 19:05:48.606582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:19.922 [2024-10-08 19:05:48.645202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:38:19.922 [2024-10-08 19:05:48.645263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:38:19.922 [2024-10-08 19:05:48.645279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.552 ms
00:38:19.922 [2024-10-08 19:05:48.645289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:20.182 [2024-10-08 19:05:48.683853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:38:20.182 [2024-10-08 19:05:48.683918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:38:20.182 [2024-10-08 19:05:48.683934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.447 ms
00:38:20.182 [2024-10-08 19:05:48.683946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:20.182 [2024-10-08 19:05:48.684009] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:38:20.183 [2024-10-08 19:05:48.684028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 122624 / 261120 wr_cnt: 1 state: open
00:38:20.183 [2024-10-08 19:05:48.684042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
[Bands 3-99: 0 / 261120 wr_cnt: 0 state: free; 97 identical ftl_dev_dump_bands entries]
00:38:20.184 [2024-10-08 19:05:48.685137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:38:20.184 [2024-10-08 19:05:48.685155] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:38:20.184 [2024-10-08 19:05:48.685166] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5a380665-21e3-4637-8cd0-a3b526ef9bbe
00:38:20.184 [2024-10-08 19:05:48.685177] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 122624
00:38:20.184 [2024-10-08 19:05:48.685187] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 123584
00:38:20.184 [2024-10-08 19:05:48.685197] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 122624
00:38:20.184 [2024-10-08 19:05:48.685208] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0078
00:38:20.184 [2024-10-08 19:05:48.685218] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:38:20.184 [2024-10-08 19:05:48.685229] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:38:20.184 [2024-10-08 19:05:48.685258] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:38:20.184 [2024-10-08 19:05:48.685267] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:38:20.184 [2024-10-08 19:05:48.685276] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:38:20.184 [2024-10-08 19:05:48.685286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:38:20.184 [2024-10-08 19:05:48.685301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:38:20.184 [2024-10-08 19:05:48.685312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.279 ms
00:38:20.184 [2024-10-08 19:05:48.685323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:20.184 [2024-10-08 19:05:48.706709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:38:20.184 [2024-10-08 19:05:48.706757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:38:20.184 [2024-10-08 19:05:48.706772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.336 ms
00:38:20.184 [2024-10-08 19:05:48.706783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:20.184 [2024-10-08 19:05:48.707318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:38:20.184 [2024-10-08 19:05:48.707341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:38:20.184 [2024-10-08 19:05:48.707353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.501 ms
00:38:20.184 [2024-10-08 19:05:48.707371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:20.184 [2024-10-08 19:05:48.754779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:38:20.184 [2024-10-08 19:05:48.754842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:38:20.184 [2024-10-08 19:05:48.754857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:38:20.184 [2024-10-08 19:05:48.754876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:20.184 [2024-10-08 19:05:48.754971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:38:20.184 [2024-10-08 19:05:48.754984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:38:20.184 [2024-10-08 19:05:48.754996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:38:20.184 [2024-10-08 19:05:48.755006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:20.184 [2024-10-08 19:05:48.755132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:38:20.184 [2024-10-08 19:05:48.755147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:38:20.184 [2024-10-08 19:05:48.755159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:38:20.184 [2024-10-08 19:05:48.755171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:20.184 [2024-10-08 19:05:48.755195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:38:20.184 [2024-10-08 19:05:48.755207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:38:20.184 [2024-10-08 19:05:48.755219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:38:20.184 [2024-10-08 19:05:48.755230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:20.184 [2024-10-08 19:05:48.884628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:38:20.184 [2024-10-08 19:05:48.884696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:38:20.184 [2024-10-08 19:05:48.884711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:38:20.184 [2024-10-08 19:05:48.884722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:20.444 [2024-10-08 19:05:48.986076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:38:20.444 [2024-10-08 19:05:48.986143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:38:20.444 [2024-10-08 19:05:48.986157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:38:20.444 [2024-10-08 19:05:48.986168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:20.444 [2024-10-08 19:05:48.986271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:38:20.444 [2024-10-08 19:05:48.986284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:38:20.444 [2024-10-08 19:05:48.986295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:38:20.444 [2024-10-08 19:05:48.986305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:20.444 [2024-10-08 19:05:48.986348] mngt/ftl_mngt.c:
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:20.444 [2024-10-08 19:05:48.986365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:38:20.444 [2024-10-08 19:05:48.986376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:20.444 [2024-10-08 19:05:48.986386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.444 [2024-10-08 19:05:48.986494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:20.444 [2024-10-08 19:05:48.986508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:38:20.444 [2024-10-08 19:05:48.986519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:20.444 [2024-10-08 19:05:48.986529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.444 [2024-10-08 19:05:48.986580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:20.444 [2024-10-08 19:05:48.986593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:38:20.444 [2024-10-08 19:05:48.986609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:20.444 [2024-10-08 19:05:48.986619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.444 [2024-10-08 19:05:48.986660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:20.444 [2024-10-08 19:05:48.986673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:38:20.444 [2024-10-08 19:05:48.986683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:20.444 [2024-10-08 19:05:48.986693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.444 [2024-10-08 19:05:48.986738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:20.444 [2024-10-08 19:05:48.986755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:38:20.444 [2024-10-08 19:05:48.986766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:20.444 [2024-10-08 19:05:48.986776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.444 [2024-10-08 19:05:48.986894] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 671.757 ms, result 0 00:38:22.391 00:38:22.391 00:38:22.391 19:05:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:38:24.294 19:05:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:38:24.553 [2024-10-08 19:05:53.113980] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
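
The two test-script commands above are the verification half of the dirty-shutdown flow: the reference file is hashed, then spdk_dd copies 262144 blocks back out of the ftl0 bdev into a plain file so the checksums can be compared. A minimal sketch of that pattern, using only the paths and flags visible in the log above (an illustration of the flow, not the actual test script):

    # Sketch: hash the reference data, read the bdev contents back, compare.
    SPDK=/home/vagrant/spdk_repo/spdk
    md5sum "$SPDK/test/ftl/testfile2"                 # reference checksum
    "$SPDK/build/bin/spdk_dd" --ib=ftl0 \
        --of="$SPDK/test/ftl/testfile" \
        --count=262144 \
        --json="$SPDK/test/ftl/config/ftl.json"       # bdev -> file copy
    md5sum "$SPDK/test/ftl/testfile"                  # should match the reference
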
00:38:24.553 [2024-10-08 19:05:53.114137] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80547 ] 00:38:24.553 [2024-10-08 19:05:53.276872] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:25.121 [2024-10-08 19:05:53.567990] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:38:25.380 [2024-10-08 19:05:53.959505] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:38:25.380 [2024-10-08 19:05:53.959590] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:38:25.380 [2024-10-08 19:05:54.124315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:25.380 [2024-10-08 19:05:54.124394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:38:25.380 [2024-10-08 19:05:54.124415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:38:25.380 [2024-10-08 19:05:54.124433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:25.380 [2024-10-08 19:05:54.124506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:25.380 [2024-10-08 19:05:54.124521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:38:25.380 [2024-10-08 19:05:54.124534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:38:25.380 [2024-10-08 19:05:54.124547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:25.380 [2024-10-08 19:05:54.124574] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:38:25.380 [2024-10-08 19:05:54.125751] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:38:25.380 [2024-10-08 19:05:54.125789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:25.380 [2024-10-08 19:05:54.125802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:38:25.380 [2024-10-08 19:05:54.125816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.222 ms 00:38:25.380 [2024-10-08 19:05:54.125829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:25.380 [2024-10-08 19:05:54.127491] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:38:25.640 [2024-10-08 19:05:54.149818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:25.640 [2024-10-08 19:05:54.149923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:38:25.640 [2024-10-08 19:05:54.149943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.324 ms 00:38:25.640 [2024-10-08 19:05:54.149965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:25.640 [2024-10-08 19:05:54.150096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:25.640 [2024-10-08 19:05:54.150112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:38:25.640 [2024-10-08 19:05:54.150125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:38:25.640 [2024-10-08 19:05:54.150136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:25.640 [2024-10-08 19:05:54.158027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:38:25.640 [2024-10-08 19:05:54.158084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:38:25.640 [2024-10-08 19:05:54.158102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.773 ms 00:38:25.640 [2024-10-08 19:05:54.158114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:25.640 [2024-10-08 19:05:54.158220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:25.640 [2024-10-08 19:05:54.158239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:38:25.640 [2024-10-08 19:05:54.158253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:38:25.640 [2024-10-08 19:05:54.158265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:25.640 [2024-10-08 19:05:54.158329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:25.640 [2024-10-08 19:05:54.158344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:38:25.640 [2024-10-08 19:05:54.158357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:38:25.640 [2024-10-08 19:05:54.158369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:25.640 [2024-10-08 19:05:54.158401] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:38:25.640 [2024-10-08 19:05:54.164088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:25.640 [2024-10-08 19:05:54.164151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:38:25.640 [2024-10-08 19:05:54.164169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.694 ms 00:38:25.640 [2024-10-08 19:05:54.164181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:25.640 [2024-10-08 19:05:54.164235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:25.640 [2024-10-08 19:05:54.164248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:38:25.640 [2024-10-08 19:05:54.164262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:38:25.640 [2024-10-08 19:05:54.164274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:25.640 [2024-10-08 19:05:54.164363] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:38:25.640 [2024-10-08 19:05:54.164392] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:38:25.640 [2024-10-08 19:05:54.164434] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:38:25.640 [2024-10-08 19:05:54.164455] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:38:25.640 [2024-10-08 19:05:54.164562] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:38:25.640 [2024-10-08 19:05:54.164578] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:38:25.640 [2024-10-08 19:05:54.164593] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:38:25.640 [2024-10-08 19:05:54.164613] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:38:25.640 [2024-10-08 19:05:54.164627] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:38:25.640 [2024-10-08 19:05:54.164641] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:38:25.640 [2024-10-08 19:05:54.164652] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:38:25.640 [2024-10-08 19:05:54.164664] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:38:25.640 [2024-10-08 19:05:54.164676] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:38:25.640 [2024-10-08 19:05:54.164689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:25.640 [2024-10-08 19:05:54.164700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:38:25.640 [2024-10-08 19:05:54.164712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:38:25.640 [2024-10-08 19:05:54.164724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:25.640 [2024-10-08 19:05:54.164823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:25.640 [2024-10-08 19:05:54.164845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:38:25.640 [2024-10-08 19:05:54.164857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:38:25.640 [2024-10-08 19:05:54.164868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:25.640 [2024-10-08 19:05:54.164991] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:38:25.640 [2024-10-08 19:05:54.165011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:38:25.640 [2024-10-08 19:05:54.165023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:38:25.640 [2024-10-08 19:05:54.165034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:25.640 [2024-10-08 19:05:54.165046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:38:25.640 [2024-10-08 19:05:54.165057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:38:25.640 [2024-10-08 19:05:54.165068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:38:25.640 [2024-10-08 19:05:54.165079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:38:25.640 [2024-10-08 19:05:54.165089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:38:25.640 [2024-10-08 19:05:54.165099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:38:25.640 [2024-10-08 19:05:54.165109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:38:25.640 [2024-10-08 19:05:54.165121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:38:25.640 [2024-10-08 19:05:54.165131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:38:25.640 [2024-10-08 19:05:54.165153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:38:25.640 [2024-10-08 19:05:54.165164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:38:25.640 [2024-10-08 19:05:54.165174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:25.640 [2024-10-08 19:05:54.165185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:38:25.640 [2024-10-08 19:05:54.165195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:38:25.640 [2024-10-08 19:05:54.165206] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:25.640 [2024-10-08 19:05:54.165216] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:38:25.640 [2024-10-08 19:05:54.165226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:38:25.641 [2024-10-08 19:05:54.165237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:25.641 [2024-10-08 19:05:54.165247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:38:25.641 [2024-10-08 19:05:54.165257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:38:25.641 [2024-10-08 19:05:54.165267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:25.641 [2024-10-08 19:05:54.165277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:38:25.641 [2024-10-08 19:05:54.165287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:38:25.641 [2024-10-08 19:05:54.165297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:25.641 [2024-10-08 19:05:54.165308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:38:25.641 [2024-10-08 19:05:54.165318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:38:25.641 [2024-10-08 19:05:54.165328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:25.641 [2024-10-08 19:05:54.165337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:38:25.641 [2024-10-08 19:05:54.165348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:38:25.641 [2024-10-08 19:05:54.165358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:38:25.641 [2024-10-08 19:05:54.165368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:38:25.641 [2024-10-08 19:05:54.165379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:38:25.641 [2024-10-08 19:05:54.165389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:38:25.641 [2024-10-08 19:05:54.165399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:38:25.641 [2024-10-08 19:05:54.165409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:38:25.641 [2024-10-08 19:05:54.165418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:25.641 [2024-10-08 19:05:54.165428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:38:25.641 [2024-10-08 19:05:54.165438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:38:25.641 [2024-10-08 19:05:54.165448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:25.641 [2024-10-08 19:05:54.165459] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:38:25.641 [2024-10-08 19:05:54.165470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:38:25.641 [2024-10-08 19:05:54.165486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:38:25.641 [2024-10-08 19:05:54.165497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:25.641 [2024-10-08 19:05:54.165509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:38:25.641 [2024-10-08 19:05:54.165519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:38:25.641 [2024-10-08 19:05:54.165530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:38:25.641 
[2024-10-08 19:05:54.165540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:38:25.641 [2024-10-08 19:05:54.165550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:38:25.641 [2024-10-08 19:05:54.165560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:38:25.641 [2024-10-08 19:05:54.165572] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:38:25.641 [2024-10-08 19:05:54.165586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:38:25.641 [2024-10-08 19:05:54.165598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:38:25.641 [2024-10-08 19:05:54.165610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:38:25.641 [2024-10-08 19:05:54.165621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:38:25.641 [2024-10-08 19:05:54.165633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:38:25.641 [2024-10-08 19:05:54.165644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:38:25.641 [2024-10-08 19:05:54.165656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:38:25.641 [2024-10-08 19:05:54.165667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:38:25.641 [2024-10-08 19:05:54.165678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:38:25.641 [2024-10-08 19:05:54.165690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:38:25.641 [2024-10-08 19:05:54.165701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:38:25.641 [2024-10-08 19:05:54.165713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:38:25.641 [2024-10-08 19:05:54.165724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:38:25.641 [2024-10-08 19:05:54.165735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:38:25.641 [2024-10-08 19:05:54.165747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:38:25.641 [2024-10-08 19:05:54.165758] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:38:25.641 [2024-10-08 19:05:54.165770] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:38:25.641 [2024-10-08 19:05:54.165782] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:38:25.641 [2024-10-08 19:05:54.165794] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:38:25.641 [2024-10-08 19:05:54.165805] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:38:25.641 [2024-10-08 19:05:54.165817] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:38:25.641 [2024-10-08 19:05:54.165834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:25.641 [2024-10-08 19:05:54.165846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:38:25.641 [2024-10-08 19:05:54.165857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.918 ms 00:38:25.641 [2024-10-08 19:05:54.165868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:25.641 [2024-10-08 19:05:54.219556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:25.641 [2024-10-08 19:05:54.219626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:38:25.641 [2024-10-08 19:05:54.219646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.623 ms 00:38:25.641 [2024-10-08 19:05:54.219658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:25.641 [2024-10-08 19:05:54.219774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:25.641 [2024-10-08 19:05:54.219787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:38:25.641 [2024-10-08 19:05:54.219799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:38:25.641 [2024-10-08 19:05:54.219810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:25.641 [2024-10-08 19:05:54.269233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:25.641 [2024-10-08 19:05:54.269293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:38:25.641 [2024-10-08 19:05:54.269332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.316 ms 00:38:25.641 [2024-10-08 19:05:54.269345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:25.641 [2024-10-08 19:05:54.269415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:25.641 [2024-10-08 19:05:54.269427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:38:25.641 [2024-10-08 19:05:54.269440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:38:25.641 [2024-10-08 19:05:54.269451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:25.641 [2024-10-08 19:05:54.269959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:25.641 [2024-10-08 19:05:54.269995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:38:25.641 [2024-10-08 19:05:54.270009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.436 ms 00:38:25.641 [2024-10-08 19:05:54.270027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:25.641 [2024-10-08 19:05:54.270159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:25.641 [2024-10-08 19:05:54.270176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:38:25.641 [2024-10-08 19:05:54.270188] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:38:25.641 [2024-10-08 19:05:54.270200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:25.641 [2024-10-08 19:05:54.289950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:25.641 [2024-10-08 19:05:54.290024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:38:25.641 [2024-10-08 19:05:54.290059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.725 ms 00:38:25.641 [2024-10-08 19:05:54.290071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:25.641 [2024-10-08 19:05:54.311279] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:38:25.641 [2024-10-08 19:05:54.311356] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:38:25.641 [2024-10-08 19:05:54.311384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:25.641 [2024-10-08 19:05:54.311397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:38:25.641 [2024-10-08 19:05:54.311413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.148 ms 00:38:25.641 [2024-10-08 19:05:54.311424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:25.641 [2024-10-08 19:05:54.344623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:25.641 [2024-10-08 19:05:54.344729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:38:25.641 [2024-10-08 19:05:54.344749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.098 ms 00:38:25.641 [2024-10-08 19:05:54.344761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:25.641 [2024-10-08 19:05:54.366114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:25.641 [2024-10-08 19:05:54.366208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:38:25.641 [2024-10-08 19:05:54.366227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.237 ms 00:38:25.641 [2024-10-08 19:05:54.366238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:25.641 [2024-10-08 19:05:54.388099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:25.641 [2024-10-08 19:05:54.388199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:38:25.641 [2024-10-08 19:05:54.388219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.751 ms 00:38:25.642 [2024-10-08 19:05:54.388230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:25.642 [2024-10-08 19:05:54.389183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:25.642 [2024-10-08 19:05:54.389221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:38:25.642 [2024-10-08 19:05:54.389236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.747 ms 00:38:25.642 [2024-10-08 19:05:54.389249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:25.900 [2024-10-08 19:05:54.484700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:25.900 [2024-10-08 19:05:54.484795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:38:25.900 [2024-10-08 19:05:54.484815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 95.416 ms 00:38:25.900 [2024-10-08 19:05:54.484827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:25.900 [2024-10-08 19:05:54.499053] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:38:25.900 [2024-10-08 19:05:54.502564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:25.900 [2024-10-08 19:05:54.502618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:38:25.900 [2024-10-08 19:05:54.502653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.635 ms 00:38:25.900 [2024-10-08 19:05:54.502672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:25.900 [2024-10-08 19:05:54.502818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:25.900 [2024-10-08 19:05:54.502833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:38:25.900 [2024-10-08 19:05:54.502846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:38:25.900 [2024-10-08 19:05:54.502857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:25.900 [2024-10-08 19:05:54.504609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:25.900 [2024-10-08 19:05:54.504659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:38:25.900 [2024-10-08 19:05:54.504674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.699 ms 00:38:25.900 [2024-10-08 19:05:54.504685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:25.900 [2024-10-08 19:05:54.504737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:25.900 [2024-10-08 19:05:54.504750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:38:25.900 [2024-10-08 19:05:54.504762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:38:25.900 [2024-10-08 19:05:54.504773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:25.900 [2024-10-08 19:05:54.504812] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:38:25.900 [2024-10-08 19:05:54.504826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:25.900 [2024-10-08 19:05:54.504837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:38:25.900 [2024-10-08 19:05:54.504848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:38:25.900 [2024-10-08 19:05:54.504864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:25.900 [2024-10-08 19:05:54.548350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:25.900 [2024-10-08 19:05:54.548439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:38:25.900 [2024-10-08 19:05:54.548459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.458 ms 00:38:25.900 [2024-10-08 19:05:54.548472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:25.900 [2024-10-08 19:05:54.548604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:25.900 [2024-10-08 19:05:54.548619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:38:25.900 [2024-10-08 19:05:54.548632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:38:25.900 [2024-10-08 19:05:54.548644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
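
Every management step in this trace is emitted as a fixed four-entry group (Action/Rollback, name, duration, status from mngt/ftl_mngt.c lines 427-431), so per-step timings can be pulled out of a saved console log with standard tools. A rough sketch, assuming one log entry per line as in the original console output; console.log is a hypothetical saved copy of this output:

    # Print each traced FTL step together with its duration.
    awk '/428:trace_step/ { sub(/.*name: /, "");     name = $0 }
         /430:trace_step/ { sub(/.*duration: /, ""); print name " -> " $0 }' console.log
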
00:38:25.900 [2024-10-08 19:05:54.550000] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 425.133 ms, result 0 00:38:27.277  [2024-10-08T19:05:57.002Z] Copying: 912/1048576 [kB] (912 kBps) [2024-10-08T19:05:57.937Z] Copying: 5320/1048576 [kB] (4408 kBps) [2024-10-08T19:05:58.874Z] Copying: 35/1024 [MB] (29 MBps) [2024-10-08T19:05:59.810Z] Copying: 67/1024 [MB] (31 MBps) [2024-10-08T19:06:01.189Z] Copying: 100/1024 [MB] (33 MBps) [2024-10-08T19:06:02.125Z] Copying: 136/1024 [MB] (36 MBps) [2024-10-08T19:06:03.107Z] Copying: 174/1024 [MB] (38 MBps) [2024-10-08T19:06:04.044Z] Copying: 212/1024 [MB] (38 MBps) [2024-10-08T19:06:04.980Z] Copying: 251/1024 [MB] (38 MBps) [2024-10-08T19:06:05.917Z] Copying: 290/1024 [MB] (38 MBps) [2024-10-08T19:06:06.854Z] Copying: 328/1024 [MB] (38 MBps) [2024-10-08T19:06:08.230Z] Copying: 365/1024 [MB] (37 MBps) [2024-10-08T19:06:08.800Z] Copying: 402/1024 [MB] (37 MBps) [2024-10-08T19:06:10.176Z] Copying: 441/1024 [MB] (38 MBps) [2024-10-08T19:06:11.143Z] Copying: 481/1024 [MB] (39 MBps) [2024-10-08T19:06:12.080Z] Copying: 520/1024 [MB] (39 MBps) [2024-10-08T19:06:13.017Z] Copying: 562/1024 [MB] (41 MBps) [2024-10-08T19:06:13.955Z] Copying: 602/1024 [MB] (39 MBps) [2024-10-08T19:06:14.894Z] Copying: 640/1024 [MB] (38 MBps) [2024-10-08T19:06:15.832Z] Copying: 677/1024 [MB] (37 MBps) [2024-10-08T19:06:17.211Z] Copying: 716/1024 [MB] (38 MBps) [2024-10-08T19:06:18.149Z] Copying: 752/1024 [MB] (36 MBps) [2024-10-08T19:06:19.088Z] Copying: 791/1024 [MB] (39 MBps) [2024-10-08T19:06:20.026Z] Copying: 828/1024 [MB] (36 MBps) [2024-10-08T19:06:20.963Z] Copying: 868/1024 [MB] (39 MBps) [2024-10-08T19:06:21.901Z] Copying: 908/1024 [MB] (40 MBps) [2024-10-08T19:06:22.839Z] Copying: 946/1024 [MB] (37 MBps) [2024-10-08T19:06:23.777Z] Copying: 985/1024 [MB] (39 MBps) [2024-10-08T19:06:24.036Z] Copying: 1024/1024 [MB] (average 35 MBps)[2024-10-08 19:06:24.027117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:55.279 [2024-10-08 19:06:24.027188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:38:55.279 [2024-10-08 19:06:24.027207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:38:55.279 [2024-10-08 19:06:24.027219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:55.280 [2024-10-08 19:06:24.027244] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:38:55.280 [2024-10-08 19:06:24.031781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:55.280 [2024-10-08 19:06:24.031824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:38:55.280 [2024-10-08 19:06:24.031837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.518 ms 00:38:55.280 [2024-10-08 19:06:24.031849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:55.280 [2024-10-08 19:06:24.032071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:55.280 [2024-10-08 19:06:24.032095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:38:55.280 [2024-10-08 19:06:24.032106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.192 ms 00:38:55.280 [2024-10-08 19:06:24.032116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:55.540 [2024-10-08 19:06:24.041499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:55.540 
[2024-10-08 19:06:24.041542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:38:55.540 [2024-10-08 19:06:24.041558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.364 ms 00:38:55.541 [2024-10-08 19:06:24.041577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:55.541 [2024-10-08 19:06:24.046874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:55.541 [2024-10-08 19:06:24.046925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:38:55.541 [2024-10-08 19:06:24.046939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.260 ms 00:38:55.541 [2024-10-08 19:06:24.046949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:55.541 [2024-10-08 19:06:24.084229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:55.541 [2024-10-08 19:06:24.084267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:38:55.541 [2024-10-08 19:06:24.084282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.202 ms 00:38:55.541 [2024-10-08 19:06:24.084293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:55.541 [2024-10-08 19:06:24.104658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:55.541 [2024-10-08 19:06:24.104694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:38:55.541 [2024-10-08 19:06:24.104708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.325 ms 00:38:55.541 [2024-10-08 19:06:24.104718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:55.541 [2024-10-08 19:06:24.106526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:55.541 [2024-10-08 19:06:24.106557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:38:55.541 [2024-10-08 19:06:24.106570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.761 ms 00:38:55.541 [2024-10-08 19:06:24.106580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:55.541 [2024-10-08 19:06:24.142700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:55.541 [2024-10-08 19:06:24.142737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:38:55.541 [2024-10-08 19:06:24.142751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.103 ms 00:38:55.541 [2024-10-08 19:06:24.142761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:55.541 [2024-10-08 19:06:24.179748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:55.541 [2024-10-08 19:06:24.179782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:38:55.541 [2024-10-08 19:06:24.179795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.948 ms 00:38:55.541 [2024-10-08 19:06:24.179806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:55.541 [2024-10-08 19:06:24.216109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:55.541 [2024-10-08 19:06:24.216146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:38:55.541 [2024-10-08 19:06:24.216159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.265 ms 00:38:55.541 [2024-10-08 19:06:24.216170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:55.541 [2024-10-08 19:06:24.252916] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:55.541 [2024-10-08 19:06:24.252951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:38:55.541 [2024-10-08 19:06:24.252971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.665 ms 00:38:55.541 [2024-10-08 19:06:24.252981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:55.541 [2024-10-08 19:06:24.253019] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:38:55.541 [2024-10-08 19:06:24.253036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:38:55.541 [2024-10-08 19:06:24.253055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:38:55.541 [2024-10-08 19:06:24.253067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:38:55.541 [2024-10-08 19:06:24.253078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:38:55.541 [2024-10-08 19:06:24.253089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:38:55.541 [2024-10-08 19:06:24.253099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:38:55.541 [2024-10-08 19:06:24.253110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:38:55.541 [2024-10-08 19:06:24.253120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:38:55.541 [2024-10-08 19:06:24.253130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:38:55.541 [2024-10-08 19:06:24.253141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:38:55.541 [2024-10-08 19:06:24.253151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:38:55.541 [2024-10-08 19:06:24.253161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:38:55.541 [2024-10-08 19:06:24.253172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:38:55.541 [2024-10-08 19:06:24.253182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:38:55.541 [2024-10-08 19:06:24.253192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:38:55.541 [2024-10-08 19:06:24.253202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:38:55.541 [2024-10-08 19:06:24.253213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:38:55.541 [2024-10-08 19:06:24.253223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:38:55.541 [2024-10-08 19:06:24.253234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:38:55.541 [2024-10-08 19:06:24.253244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:38:55.541 [2024-10-08 19:06:24.253255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:38:55.541 [2024-10-08 
19:06:24.253265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:38:55.541 [2024-10-08 19:06:24.253293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:38:55.541 [2024-10-08 19:06:24.253303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:38:55.541 [2024-10-08 19:06:24.253314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:38:55.541 [2024-10-08 19:06:24.253325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:38:55.541 [2024-10-08 19:06:24.253335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:38:55.541 [2024-10-08 19:06:24.253346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:38:55.541 [2024-10-08 19:06:24.253357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:38:55.541 [2024-10-08 19:06:24.253368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:38:55.541 [2024-10-08 19:06:24.253379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:38:55.541 [2024-10-08 19:06:24.253389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:38:55.541 [2024-10-08 19:06:24.253399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:38:55.541 [2024-10-08 19:06:24.253410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:38:55.541 [2024-10-08 19:06:24.253421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:38:55.541 [2024-10-08 19:06:24.253431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:38:55.541 [2024-10-08 19:06:24.253442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:38:55.541 [2024-10-08 19:06:24.253453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:38:55.541 [2024-10-08 19:06:24.253463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:38:55.541 [2024-10-08 19:06:24.253474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:38:55.541 [2024-10-08 19:06:24.253484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:38:55.541 [2024-10-08 19:06:24.253495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:38:55.541 [2024-10-08 19:06:24.253505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:38:55.541 [2024-10-08 19:06:24.253516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:38:55.541 [2024-10-08 19:06:24.253526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:38:55.541 [2024-10-08 19:06:24.253536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 
00:38:55.541 [2024-10-08 19:06:24.253547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:38:55.541 [2024-10-08 19:06:24.253557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:38:55.541 [2024-10-08 19:06:24.253569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:38:55.541 [2024-10-08 19:06:24.253580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:38:55.541 [2024-10-08 19:06:24.253591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:38:55.541 [2024-10-08 19:06:24.253602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:38:55.541 [2024-10-08 19:06:24.253613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:38:55.541 [2024-10-08 19:06:24.253624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:38:55.541 [2024-10-08 19:06:24.253635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:38:55.541 [2024-10-08 19:06:24.253646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:38:55.541 [2024-10-08 19:06:24.253657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:38:55.542 [2024-10-08 19:06:24.253668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:38:55.542 [2024-10-08 19:06:24.253678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:38:55.542 [2024-10-08 19:06:24.253689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:38:55.542 [2024-10-08 19:06:24.253700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:38:55.542 [2024-10-08 19:06:24.253710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:38:55.542 [2024-10-08 19:06:24.253720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:38:55.542 [2024-10-08 19:06:24.253731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:38:55.542 [2024-10-08 19:06:24.253741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:38:55.542 [2024-10-08 19:06:24.253752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:38:55.542 [2024-10-08 19:06:24.253763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:38:55.542 [2024-10-08 19:06:24.253774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:38:55.542 [2024-10-08 19:06:24.253784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:38:55.542 [2024-10-08 19:06:24.253795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:38:55.542 [2024-10-08 19:06:24.253805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 
wr_cnt: 0 state: free 00:38:55.542 [2024-10-08 19:06:24.253816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:38:55.542 [2024-10-08 19:06:24.253827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:38:55.542 [2024-10-08 19:06:24.253837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:38:55.542 [2024-10-08 19:06:24.253848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:38:55.542 [2024-10-08 19:06:24.253858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:38:55.542 [2024-10-08 19:06:24.253869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:38:55.542 [2024-10-08 19:06:24.253880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:38:55.542 [2024-10-08 19:06:24.253891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:38:55.542 [2024-10-08 19:06:24.253902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:38:55.542 [2024-10-08 19:06:24.253928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:38:55.542 [2024-10-08 19:06:24.253939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:38:55.542 [2024-10-08 19:06:24.253951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:38:55.542 [2024-10-08 19:06:24.253977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:38:55.542 [2024-10-08 19:06:24.253988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:38:55.542 [2024-10-08 19:06:24.253999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:38:55.542 [2024-10-08 19:06:24.254010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:38:55.542 [2024-10-08 19:06:24.254020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:38:55.542 [2024-10-08 19:06:24.254031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:38:55.542 [2024-10-08 19:06:24.254042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:38:55.542 [2024-10-08 19:06:24.254053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:38:55.542 [2024-10-08 19:06:24.254064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:38:55.542 [2024-10-08 19:06:24.254075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:38:55.542 [2024-10-08 19:06:24.254085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:38:55.542 [2024-10-08 19:06:24.254096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:38:55.542 [2024-10-08 19:06:24.254106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 96: 0 / 261120 wr_cnt: 0 state: free 00:38:55.542 [2024-10-08 19:06:24.254117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:38:55.542 [2024-10-08 19:06:24.254127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:38:55.542 [2024-10-08 19:06:24.254138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:38:55.542 [2024-10-08 19:06:24.254149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:38:55.542 [2024-10-08 19:06:24.254168] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:38:55.542 [2024-10-08 19:06:24.254178] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5a380665-21e3-4637-8cd0-a3b526ef9bbe 00:38:55.542 [2024-10-08 19:06:24.254194] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:38:55.542 [2024-10-08 19:06:24.254204] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 142016 00:38:55.542 [2024-10-08 19:06:24.254214] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 140032 00:38:55.542 [2024-10-08 19:06:24.254225] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0142 00:38:55.542 [2024-10-08 19:06:24.254235] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:38:55.542 [2024-10-08 19:06:24.254245] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:38:55.542 [2024-10-08 19:06:24.254256] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:38:55.542 [2024-10-08 19:06:24.254265] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:38:55.542 [2024-10-08 19:06:24.254274] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:38:55.542 [2024-10-08 19:06:24.254284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:55.542 [2024-10-08 19:06:24.254299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:38:55.542 [2024-10-08 19:06:24.254321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.266 ms 00:38:55.542 [2024-10-08 19:06:24.254335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:55.542 [2024-10-08 19:06:24.274389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:55.542 [2024-10-08 19:06:24.274420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:38:55.542 [2024-10-08 19:06:24.274433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.016 ms 00:38:55.542 [2024-10-08 19:06:24.274443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:55.542 [2024-10-08 19:06:24.275053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:55.542 [2024-10-08 19:06:24.275077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:38:55.542 [2024-10-08 19:06:24.275089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.589 ms 00:38:55.542 [2024-10-08 19:06:24.275099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:55.802 [2024-10-08 19:06:24.321621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:55.802 [2024-10-08 19:06:24.321658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:38:55.802 [2024-10-08 19:06:24.321688] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:55.802 [2024-10-08 19:06:24.321698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:55.802 [2024-10-08 19:06:24.321755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:55.802 [2024-10-08 19:06:24.321772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:38:55.802 [2024-10-08 19:06:24.321783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:55.802 [2024-10-08 19:06:24.321793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:55.802 [2024-10-08 19:06:24.321857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:55.802 [2024-10-08 19:06:24.321869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:38:55.802 [2024-10-08 19:06:24.321880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:55.802 [2024-10-08 19:06:24.321890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:55.802 [2024-10-08 19:06:24.321907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:55.802 [2024-10-08 19:06:24.321918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:38:55.802 [2024-10-08 19:06:24.321932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:55.802 [2024-10-08 19:06:24.321942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:55.802 [2024-10-08 19:06:24.446729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:55.802 [2024-10-08 19:06:24.446796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:38:55.802 [2024-10-08 19:06:24.446812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:55.802 [2024-10-08 19:06:24.446824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:55.802 [2024-10-08 19:06:24.550000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:55.802 [2024-10-08 19:06:24.550083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:38:55.802 [2024-10-08 19:06:24.550100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:55.802 [2024-10-08 19:06:24.550112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:55.802 [2024-10-08 19:06:24.550202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:55.802 [2024-10-08 19:06:24.550215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:38:55.802 [2024-10-08 19:06:24.550226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:55.802 [2024-10-08 19:06:24.550237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:55.803 [2024-10-08 19:06:24.550296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:55.803 [2024-10-08 19:06:24.550308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:38:55.803 [2024-10-08 19:06:24.550318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:55.803 [2024-10-08 19:06:24.550332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:55.803 [2024-10-08 19:06:24.550454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:55.803 [2024-10-08 19:06:24.550467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory 
pools 00:38:55.803 [2024-10-08 19:06:24.550478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:55.803 [2024-10-08 19:06:24.550496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:55.803 [2024-10-08 19:06:24.550532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:55.803 [2024-10-08 19:06:24.550545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:38:55.803 [2024-10-08 19:06:24.550556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:55.803 [2024-10-08 19:06:24.550566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:55.803 [2024-10-08 19:06:24.550608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:55.803 [2024-10-08 19:06:24.550620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:38:55.803 [2024-10-08 19:06:24.550631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:55.803 [2024-10-08 19:06:24.550641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:55.803 [2024-10-08 19:06:24.550685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:55.803 [2024-10-08 19:06:24.550697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:38:55.803 [2024-10-08 19:06:24.550707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:55.803 [2024-10-08 19:06:24.550721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:55.803 [2024-10-08 19:06:24.550838] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 523.689 ms, result 0 00:38:57.185 00:38:57.185 00:38:57.185 19:06:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:38:59.091 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:38:59.091 19:06:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:38:59.091 [2024-10-08 19:06:27.665753] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
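[Editor's note] The statistics dump in the FTL shutdown sequence above reports "WAF: 1.0142"; the write amplification factor is simply total device writes divided by user writes. A one-liner with the two values copied from the log reproduces it (illustrative only, not part of the test script):

awk 'BEGIN { printf "WAF = %.4f\n", 142016 / 140032 }'   # -> WAF = 1.0142

The second shutdown later in this log instead prints "WAF: inf": that run recorded 960 internal writes against zero user writes, so the ratio is undefined and is reported as infinity.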
00:38:59.091 [2024-10-08 19:06:27.665900] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80894 ] 00:38:59.091 [2024-10-08 19:06:27.836731] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:59.350 [2024-10-08 19:06:28.098374] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:38:59.920 [2024-10-08 19:06:28.460070] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:38:59.920 [2024-10-08 19:06:28.460137] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:38:59.920 [2024-10-08 19:06:28.623119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:59.920 [2024-10-08 19:06:28.623178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:38:59.920 [2024-10-08 19:06:28.623197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:38:59.920 [2024-10-08 19:06:28.623218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:59.920 [2024-10-08 19:06:28.623292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:59.920 [2024-10-08 19:06:28.623307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:38:59.920 [2024-10-08 19:06:28.623320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:38:59.920 [2024-10-08 19:06:28.623331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:59.920 [2024-10-08 19:06:28.623357] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:38:59.920 [2024-10-08 19:06:28.624576] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:38:59.920 [2024-10-08 19:06:28.624615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:59.920 [2024-10-08 19:06:28.624628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:38:59.920 [2024-10-08 19:06:28.624640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.264 ms 00:38:59.920 [2024-10-08 19:06:28.624652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:59.920 [2024-10-08 19:06:28.626199] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:38:59.920 [2024-10-08 19:06:28.646960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:59.920 [2024-10-08 19:06:28.647010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:38:59.920 [2024-10-08 19:06:28.647026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.754 ms 00:38:59.920 [2024-10-08 19:06:28.647037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:59.920 [2024-10-08 19:06:28.647114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:59.920 [2024-10-08 19:06:28.647127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:38:59.920 [2024-10-08 19:06:28.647139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:38:59.920 [2024-10-08 19:06:28.647149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:59.920 [2024-10-08 19:06:28.654173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:38:59.920 [2024-10-08 19:06:28.654209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:38:59.920 [2024-10-08 19:06:28.654222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.942 ms 00:38:59.920 [2024-10-08 19:06:28.654233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:59.920 [2024-10-08 19:06:28.654333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:59.920 [2024-10-08 19:06:28.654348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:38:59.920 [2024-10-08 19:06:28.654359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:38:59.920 [2024-10-08 19:06:28.654369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:59.920 [2024-10-08 19:06:28.654419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:59.920 [2024-10-08 19:06:28.654432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:38:59.920 [2024-10-08 19:06:28.654442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:38:59.920 [2024-10-08 19:06:28.654452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:59.920 [2024-10-08 19:06:28.654480] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:38:59.920 [2024-10-08 19:06:28.659146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:59.920 [2024-10-08 19:06:28.659182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:38:59.920 [2024-10-08 19:06:28.659195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.674 ms 00:38:59.920 [2024-10-08 19:06:28.659205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:59.920 [2024-10-08 19:06:28.659237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:59.920 [2024-10-08 19:06:28.659248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:38:59.920 [2024-10-08 19:06:28.659259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:38:59.920 [2024-10-08 19:06:28.659268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:59.920 [2024-10-08 19:06:28.659329] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:38:59.920 [2024-10-08 19:06:28.659382] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:38:59.920 [2024-10-08 19:06:28.659424] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:38:59.920 [2024-10-08 19:06:28.659442] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:38:59.920 [2024-10-08 19:06:28.659534] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:38:59.920 [2024-10-08 19:06:28.659547] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:38:59.920 [2024-10-08 19:06:28.659561] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:38:59.920 [2024-10-08 19:06:28.659578] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:38:59.920 [2024-10-08 19:06:28.659590] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:38:59.920 [2024-10-08 19:06:28.659603] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:38:59.920 [2024-10-08 19:06:28.659612] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:38:59.920 [2024-10-08 19:06:28.659623] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:38:59.920 [2024-10-08 19:06:28.659633] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:38:59.920 [2024-10-08 19:06:28.659644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:59.920 [2024-10-08 19:06:28.659655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:38:59.920 [2024-10-08 19:06:28.659666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:38:59.920 [2024-10-08 19:06:28.659676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:59.920 [2024-10-08 19:06:28.659751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:59.920 [2024-10-08 19:06:28.659766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:38:59.920 [2024-10-08 19:06:28.659777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:38:59.920 [2024-10-08 19:06:28.659787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:59.920 [2024-10-08 19:06:28.659885] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:38:59.920 [2024-10-08 19:06:28.659900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:38:59.920 [2024-10-08 19:06:28.659911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:38:59.920 [2024-10-08 19:06:28.659921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:59.920 [2024-10-08 19:06:28.659932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:38:59.920 [2024-10-08 19:06:28.659941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:38:59.920 [2024-10-08 19:06:28.659951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:38:59.920 [2024-10-08 19:06:28.659973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:38:59.920 [2024-10-08 19:06:28.659983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:38:59.920 [2024-10-08 19:06:28.659992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:38:59.920 [2024-10-08 19:06:28.660004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:38:59.920 [2024-10-08 19:06:28.660014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:38:59.920 [2024-10-08 19:06:28.660024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:38:59.920 [2024-10-08 19:06:28.660042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:38:59.921 [2024-10-08 19:06:28.660053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:38:59.921 [2024-10-08 19:06:28.660063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:59.921 [2024-10-08 19:06:28.660072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:38:59.921 [2024-10-08 19:06:28.660081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:38:59.921 [2024-10-08 19:06:28.660091] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:59.921 [2024-10-08 19:06:28.660100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:38:59.921 [2024-10-08 19:06:28.660110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:38:59.921 [2024-10-08 19:06:28.660119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:59.921 [2024-10-08 19:06:28.660128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:38:59.921 [2024-10-08 19:06:28.660137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:38:59.921 [2024-10-08 19:06:28.660147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:59.921 [2024-10-08 19:06:28.660156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:38:59.921 [2024-10-08 19:06:28.660166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:38:59.921 [2024-10-08 19:06:28.660175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:59.921 [2024-10-08 19:06:28.660184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:38:59.921 [2024-10-08 19:06:28.660193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:38:59.921 [2024-10-08 19:06:28.660202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:59.921 [2024-10-08 19:06:28.660211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:38:59.921 [2024-10-08 19:06:28.660220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:38:59.921 [2024-10-08 19:06:28.660229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:38:59.921 [2024-10-08 19:06:28.660238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:38:59.921 [2024-10-08 19:06:28.660247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:38:59.921 [2024-10-08 19:06:28.660256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:38:59.921 [2024-10-08 19:06:28.660266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:38:59.921 [2024-10-08 19:06:28.660275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:38:59.921 [2024-10-08 19:06:28.660284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:59.921 [2024-10-08 19:06:28.660293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:38:59.921 [2024-10-08 19:06:28.660302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:38:59.921 [2024-10-08 19:06:28.660312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:59.921 [2024-10-08 19:06:28.660321] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:38:59.921 [2024-10-08 19:06:28.660331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:38:59.921 [2024-10-08 19:06:28.660346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:38:59.921 [2024-10-08 19:06:28.660356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:59.921 [2024-10-08 19:06:28.660367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:38:59.921 [2024-10-08 19:06:28.660377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:38:59.921 [2024-10-08 19:06:28.660386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:38:59.921 
[2024-10-08 19:06:28.660396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:38:59.921 [2024-10-08 19:06:28.660405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:38:59.921 [2024-10-08 19:06:28.660414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:38:59.921 [2024-10-08 19:06:28.660424] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:38:59.921 [2024-10-08 19:06:28.660436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:38:59.921 [2024-10-08 19:06:28.660448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:38:59.921 [2024-10-08 19:06:28.660459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:38:59.921 [2024-10-08 19:06:28.660470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:38:59.921 [2024-10-08 19:06:28.660480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:38:59.921 [2024-10-08 19:06:28.660490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:38:59.921 [2024-10-08 19:06:28.660501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:38:59.921 [2024-10-08 19:06:28.660512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:38:59.921 [2024-10-08 19:06:28.660523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:38:59.921 [2024-10-08 19:06:28.660533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:38:59.921 [2024-10-08 19:06:28.660543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:38:59.921 [2024-10-08 19:06:28.660554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:38:59.921 [2024-10-08 19:06:28.660564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:38:59.921 [2024-10-08 19:06:28.660574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:38:59.921 [2024-10-08 19:06:28.660585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:38:59.921 [2024-10-08 19:06:28.660595] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:38:59.921 [2024-10-08 19:06:28.660606] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:38:59.921 [2024-10-08 19:06:28.660617] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:38:59.921 [2024-10-08 19:06:28.660634] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:38:59.921 [2024-10-08 19:06:28.660645] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:38:59.921 [2024-10-08 19:06:28.660656] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:38:59.921 [2024-10-08 19:06:28.660667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:59.921 [2024-10-08 19:06:28.660679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:38:59.921 [2024-10-08 19:06:28.660689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.839 ms 00:38:59.921 [2024-10-08 19:06:28.660699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.181 [2024-10-08 19:06:28.715734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:00.181 [2024-10-08 19:06:28.715780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:00.181 [2024-10-08 19:06:28.715795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.982 ms 00:39:00.181 [2024-10-08 19:06:28.715807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.181 [2024-10-08 19:06:28.715898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:00.181 [2024-10-08 19:06:28.715909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:39:00.181 [2024-10-08 19:06:28.715920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:39:00.181 [2024-10-08 19:06:28.715931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.181 [2024-10-08 19:06:28.763436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:00.181 [2024-10-08 19:06:28.763481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:00.181 [2024-10-08 19:06:28.763499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.409 ms 00:39:00.181 [2024-10-08 19:06:28.763510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.181 [2024-10-08 19:06:28.763554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:00.181 [2024-10-08 19:06:28.763566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:39:00.181 [2024-10-08 19:06:28.763578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:39:00.181 [2024-10-08 19:06:28.763588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.181 [2024-10-08 19:06:28.764074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:00.181 [2024-10-08 19:06:28.764096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:00.181 [2024-10-08 19:06:28.764107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:39:00.181 [2024-10-08 19:06:28.764124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.181 [2024-10-08 19:06:28.764237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:00.181 [2024-10-08 19:06:28.764251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:39:00.181 [2024-10-08 19:06:28.764261] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:39:00.181 [2024-10-08 19:06:28.764271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.181 [2024-10-08 19:06:28.782383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:00.181 [2024-10-08 19:06:28.782425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:39:00.181 [2024-10-08 19:06:28.782455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.089 ms 00:39:00.181 [2024-10-08 19:06:28.782466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.181 [2024-10-08 19:06:28.801714] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:39:00.181 [2024-10-08 19:06:28.801755] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:39:00.181 [2024-10-08 19:06:28.801771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:00.181 [2024-10-08 19:06:28.801798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:39:00.181 [2024-10-08 19:06:28.801810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.186 ms 00:39:00.181 [2024-10-08 19:06:28.801820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.181 [2024-10-08 19:06:28.831892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:00.181 [2024-10-08 19:06:28.831950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:39:00.181 [2024-10-08 19:06:28.831973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.027 ms 00:39:00.181 [2024-10-08 19:06:28.832000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.181 [2024-10-08 19:06:28.850599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:00.181 [2024-10-08 19:06:28.850638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:39:00.181 [2024-10-08 19:06:28.850667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.547 ms 00:39:00.181 [2024-10-08 19:06:28.850676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.181 [2024-10-08 19:06:28.868353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:00.182 [2024-10-08 19:06:28.868391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:39:00.182 [2024-10-08 19:06:28.868404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.623 ms 00:39:00.182 [2024-10-08 19:06:28.868414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.182 [2024-10-08 19:06:28.869272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:00.182 [2024-10-08 19:06:28.869304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:39:00.182 [2024-10-08 19:06:28.869316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.733 ms 00:39:00.182 [2024-10-08 19:06:28.869327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.441 [2024-10-08 19:06:28.956290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:00.441 [2024-10-08 19:06:28.956359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:39:00.441 [2024-10-08 19:06:28.956376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 86.940 ms 00:39:00.441 [2024-10-08 19:06:28.956387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.441 [2024-10-08 19:06:28.967736] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:39:00.441 [2024-10-08 19:06:28.970813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:00.441 [2024-10-08 19:06:28.970846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:39:00.441 [2024-10-08 19:06:28.970860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.361 ms 00:39:00.441 [2024-10-08 19:06:28.970876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.441 [2024-10-08 19:06:28.971001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:00.441 [2024-10-08 19:06:28.971014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:39:00.441 [2024-10-08 19:06:28.971027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:39:00.441 [2024-10-08 19:06:28.971037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.441 [2024-10-08 19:06:28.971948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:00.441 [2024-10-08 19:06:28.971991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:39:00.441 [2024-10-08 19:06:28.972005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.865 ms 00:39:00.441 [2024-10-08 19:06:28.972016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.441 [2024-10-08 19:06:28.972047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:00.441 [2024-10-08 19:06:28.972060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:39:00.441 [2024-10-08 19:06:28.972071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:39:00.441 [2024-10-08 19:06:28.972082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.441 [2024-10-08 19:06:28.972119] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:39:00.441 [2024-10-08 19:06:28.972132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:00.441 [2024-10-08 19:06:28.972143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:39:00.441 [2024-10-08 19:06:28.972154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:39:00.441 [2024-10-08 19:06:28.972169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.441 [2024-10-08 19:06:29.009820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:00.441 [2024-10-08 19:06:29.009864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:39:00.441 [2024-10-08 19:06:29.009879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.629 ms 00:39:00.441 [2024-10-08 19:06:29.009890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.441 [2024-10-08 19:06:29.009981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:00.441 [2024-10-08 19:06:29.009996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:39:00.441 [2024-10-08 19:06:29.010007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:39:00.441 [2024-10-08 19:06:29.010017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
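[Editor's note] The layout dump in the startup sequence above is internally consistent and worth sanity-checking when debugging FTL metadata issues: an L2P table of 20971520 entries at 4 bytes per address comes to 80 MiB, matching both the "Region l2p ... blocks: 80.00 MiB" line and the superblock entry "Region type:0x2 ... blk_sz:0x5000" (0x5000 blocks; reading that as 4 KiB FTL blocks is an assumption the arithmetic below confirms). A shell sketch with the values copied from the log:

awk 'BEGIN { printf "l2p size = %.2f MiB\n", 20971520 * 4 / 1048576 }'   # entries x address size -> 80.00 MiB
echo "$(( 0x5000 * 4096 / 1048576 )) MiB"                                # superblock block count -> 80 MiB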
00:39:00.441 [2024-10-08 19:06:29.011222] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 387.591 ms, result 0 00:39:01.820  [2024-10-08T19:06:31.518Z] Copying: 31/1024 [MB] (31 MBps) [2024-10-08T19:06:32.455Z] Copying: 62/1024 [MB] (31 MBps) [2024-10-08T19:06:33.393Z] Copying: 93/1024 [MB] (31 MBps) [2024-10-08T19:06:34.331Z] Copying: 123/1024 [MB] (29 MBps) [2024-10-08T19:06:35.269Z] Copying: 154/1024 [MB] (31 MBps) [2024-10-08T19:06:36.648Z] Copying: 186/1024 [MB] (31 MBps) [2024-10-08T19:06:37.360Z] Copying: 216/1024 [MB] (30 MBps) [2024-10-08T19:06:38.298Z] Copying: 248/1024 [MB] (31 MBps) [2024-10-08T19:06:39.673Z] Copying: 279/1024 [MB] (31 MBps) [2024-10-08T19:06:40.240Z] Copying: 310/1024 [MB] (31 MBps) [2024-10-08T19:06:41.617Z] Copying: 342/1024 [MB] (32 MBps) [2024-10-08T19:06:42.552Z] Copying: 374/1024 [MB] (31 MBps) [2024-10-08T19:06:43.487Z] Copying: 405/1024 [MB] (31 MBps) [2024-10-08T19:06:44.421Z] Copying: 436/1024 [MB] (30 MBps) [2024-10-08T19:06:45.387Z] Copying: 466/1024 [MB] (30 MBps) [2024-10-08T19:06:46.322Z] Copying: 498/1024 [MB] (32 MBps) [2024-10-08T19:06:47.259Z] Copying: 526/1024 [MB] (27 MBps) [2024-10-08T19:06:48.635Z] Copying: 555/1024 [MB] (28 MBps) [2024-10-08T19:06:49.571Z] Copying: 583/1024 [MB] (28 MBps) [2024-10-08T19:06:50.506Z] Copying: 611/1024 [MB] (28 MBps) [2024-10-08T19:06:51.442Z] Copying: 640/1024 [MB] (28 MBps) [2024-10-08T19:06:52.379Z] Copying: 669/1024 [MB] (28 MBps) [2024-10-08T19:06:53.316Z] Copying: 697/1024 [MB] (28 MBps) [2024-10-08T19:06:54.253Z] Copying: 725/1024 [MB] (27 MBps) [2024-10-08T19:06:55.631Z] Copying: 753/1024 [MB] (28 MBps) [2024-10-08T19:06:56.568Z] Copying: 781/1024 [MB] (28 MBps) [2024-10-08T19:06:57.532Z] Copying: 810/1024 [MB] (28 MBps) [2024-10-08T19:06:58.468Z] Copying: 839/1024 [MB] (28 MBps) [2024-10-08T19:06:59.404Z] Copying: 867/1024 [MB] (27 MBps) [2024-10-08T19:07:00.352Z] Copying: 896/1024 [MB] (29 MBps) [2024-10-08T19:07:01.288Z] Copying: 926/1024 [MB] (29 MBps) [2024-10-08T19:07:02.665Z] Copying: 958/1024 [MB] (31 MBps) [2024-10-08T19:07:03.602Z] Copying: 987/1024 [MB] (29 MBps) [2024-10-08T19:07:03.602Z] Copying: 1020/1024 [MB] (32 MBps) [2024-10-08T19:07:04.171Z] Copying: 1024/1024 [MB] (average 30 MBps)[2024-10-08 19:07:04.091300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:35.414 [2024-10-08 19:07:04.091406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:39:35.414 [2024-10-08 19:07:04.091434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:39:35.414 [2024-10-08 19:07:04.091459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:35.414 [2024-10-08 19:07:04.091494] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:39:35.414 [2024-10-08 19:07:04.096574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:35.414 [2024-10-08 19:07:04.096615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:39:35.414 [2024-10-08 19:07:04.096628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.059 ms 00:39:35.414 [2024-10-08 19:07:04.096639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:35.414 [2024-10-08 19:07:04.096893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:35.414 [2024-10-08 19:07:04.096909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop 
core poller 00:39:35.414 [2024-10-08 19:07:04.096922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms 00:39:35.414 [2024-10-08 19:07:04.096932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:35.414 [2024-10-08 19:07:04.099748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:35.414 [2024-10-08 19:07:04.099776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:39:35.414 [2024-10-08 19:07:04.099788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.793 ms 00:39:35.414 [2024-10-08 19:07:04.099799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:35.414 [2024-10-08 19:07:04.105472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:35.414 [2024-10-08 19:07:04.105507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:39:35.414 [2024-10-08 19:07:04.105521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.651 ms 00:39:35.414 [2024-10-08 19:07:04.105532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:35.414 [2024-10-08 19:07:04.144636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:35.414 [2024-10-08 19:07:04.144679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:39:35.414 [2024-10-08 19:07:04.144695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.021 ms 00:39:35.414 [2024-10-08 19:07:04.144706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:35.414 [2024-10-08 19:07:04.167281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:35.414 [2024-10-08 19:07:04.167329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:39:35.414 [2024-10-08 19:07:04.167344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.531 ms 00:39:35.414 [2024-10-08 19:07:04.167355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:35.414 [2024-10-08 19:07:04.169247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:35.414 [2024-10-08 19:07:04.169291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:39:35.414 [2024-10-08 19:07:04.169305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.834 ms 00:39:35.414 [2024-10-08 19:07:04.169317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:35.675 [2024-10-08 19:07:04.207175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:35.675 [2024-10-08 19:07:04.207216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:39:35.675 [2024-10-08 19:07:04.207232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.838 ms 00:39:35.675 [2024-10-08 19:07:04.207243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:35.675 [2024-10-08 19:07:04.244367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:35.675 [2024-10-08 19:07:04.244407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:39:35.675 [2024-10-08 19:07:04.244427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.082 ms 00:39:35.675 [2024-10-08 19:07:04.244439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:35.675 [2024-10-08 19:07:04.281876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:35.675 [2024-10-08 19:07:04.281916] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:39:35.675 [2024-10-08 19:07:04.281931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.391 ms 00:39:35.675 [2024-10-08 19:07:04.281941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:35.675 [2024-10-08 19:07:04.319204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:35.675 [2024-10-08 19:07:04.319243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:39:35.675 [2024-10-08 19:07:04.319258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.150 ms 00:39:35.675 [2024-10-08 19:07:04.319268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:35.675 [2024-10-08 19:07:04.319316] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:39:35.675 [2024-10-08 19:07:04.319336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:39:35.675 [2024-10-08 19:07:04.319351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:39:35.675 [2024-10-08 19:07:04.319371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:39:35.675 [2024-10-08 19:07:04.319384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:39:35.675 [2024-10-08 19:07:04.319396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:39:35.675 [2024-10-08 19:07:04.319409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:39:35.675 [2024-10-08 19:07:04.319421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:39:35.675 [2024-10-08 19:07:04.319433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:39:35.675 [2024-10-08 19:07:04.319445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:39:35.675 [2024-10-08 19:07:04.319456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:39:35.675 [2024-10-08 19:07:04.319468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:39:35.675 [2024-10-08 19:07:04.319480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:39:35.675 [2024-10-08 19:07:04.319492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:39:35.675 [2024-10-08 19:07:04.319503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:39:35.675 [2024-10-08 19:07:04.319514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:39:35.675 [2024-10-08 19:07:04.319525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:39:35.675 [2024-10-08 19:07:04.319537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:39:35.675 [2024-10-08 19:07:04.319548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:39:35.675 [2024-10-08 19:07:04.319559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
19: 0 / 261120 wr_cnt: 0 state: free 00:39:35.675 [2024-10-08 19:07:04.319570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:39:35.675 [2024-10-08 19:07:04.319581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.319601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.319612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.319623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.319635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.319646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.319657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.319667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.319678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.319690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.319702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.319714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.319726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.319737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.319749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.319761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.319772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.319784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.319794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.319805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.319816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.319827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.319837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.319848] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.319859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.319870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.319881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.319892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.319904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.319915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.319925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.319936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.319947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.319969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.319981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.319992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.320003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.320016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.320027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.320039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.320050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.320062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.320076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.320088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.320099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.320111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.320123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.320134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.320145] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.320157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.320168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.320180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.320190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.320202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.320214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.320225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.320236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.320248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.320258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.320269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.320280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.320291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.320302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.320313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.320324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.320336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.320347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.320358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.320369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.320380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.320392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.320403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.320414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 
19:07:04.320426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.320438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.320450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.320461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.320472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.320484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:39:35.676 [2024-10-08 19:07:04.320507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:39:35.677 [2024-10-08 19:07:04.320527] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:39:35.677 [2024-10-08 19:07:04.320538] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5a380665-21e3-4637-8cd0-a3b526ef9bbe 00:39:35.677 [2024-10-08 19:07:04.320550] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:39:35.677 [2024-10-08 19:07:04.320561] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:39:35.677 [2024-10-08 19:07:04.320571] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:39:35.677 [2024-10-08 19:07:04.320582] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:39:35.677 [2024-10-08 19:07:04.320592] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:39:35.677 [2024-10-08 19:07:04.320605] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:39:35.677 [2024-10-08 19:07:04.320621] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:39:35.677 [2024-10-08 19:07:04.320631] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:39:35.677 [2024-10-08 19:07:04.320640] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:39:35.677 [2024-10-08 19:07:04.320650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:35.677 [2024-10-08 19:07:04.320673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:39:35.677 [2024-10-08 19:07:04.320684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.336 ms 00:39:35.677 [2024-10-08 19:07:04.320695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:35.677 [2024-10-08 19:07:04.342417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:35.677 [2024-10-08 19:07:04.342450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:39:35.677 [2024-10-08 19:07:04.342463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.682 ms 00:39:35.677 [2024-10-08 19:07:04.342480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:35.677 [2024-10-08 19:07:04.343086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:35.677 [2024-10-08 19:07:04.343103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:39:35.677 [2024-10-08 19:07:04.343115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.584 ms 00:39:35.677 [2024-10-08 19:07:04.343125] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:35.677 [2024-10-08 19:07:04.392849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:35.677 [2024-10-08 19:07:04.392909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:39:35.677 [2024-10-08 19:07:04.392930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:35.677 [2024-10-08 19:07:04.392943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:35.677 [2024-10-08 19:07:04.393025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:35.677 [2024-10-08 19:07:04.393038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:39:35.677 [2024-10-08 19:07:04.393050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:35.677 [2024-10-08 19:07:04.393062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:35.677 [2024-10-08 19:07:04.393145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:35.677 [2024-10-08 19:07:04.393160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:35.677 [2024-10-08 19:07:04.393172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:35.677 [2024-10-08 19:07:04.393188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:35.677 [2024-10-08 19:07:04.393207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:35.677 [2024-10-08 19:07:04.393219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:39:35.677 [2024-10-08 19:07:04.393230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:35.677 [2024-10-08 19:07:04.393241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:35.936 [2024-10-08 19:07:04.535035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:35.937 [2024-10-08 19:07:04.535114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:35.937 [2024-10-08 19:07:04.535139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:35.937 [2024-10-08 19:07:04.535150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:35.937 [2024-10-08 19:07:04.645676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:35.937 [2024-10-08 19:07:04.645768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:35.937 [2024-10-08 19:07:04.645785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:35.937 [2024-10-08 19:07:04.645798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:35.937 [2024-10-08 19:07:04.645932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:35.937 [2024-10-08 19:07:04.645946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:39:35.937 [2024-10-08 19:07:04.645977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:35.937 [2024-10-08 19:07:04.645989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:35.937 [2024-10-08 19:07:04.646059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:35.937 [2024-10-08 19:07:04.646073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:39:35.937 [2024-10-08 19:07:04.646086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:39:35.937 [2024-10-08 19:07:04.646097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:35.937 [2024-10-08 19:07:04.646225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:35.937 [2024-10-08 19:07:04.646240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:39:35.937 [2024-10-08 19:07:04.646253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:35.937 [2024-10-08 19:07:04.646264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:35.937 [2024-10-08 19:07:04.646312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:35.937 [2024-10-08 19:07:04.646327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:39:35.937 [2024-10-08 19:07:04.646338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:35.937 [2024-10-08 19:07:04.646349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:35.937 [2024-10-08 19:07:04.646397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:35.937 [2024-10-08 19:07:04.646410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:39:35.937 [2024-10-08 19:07:04.646422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:35.937 [2024-10-08 19:07:04.646433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:35.937 [2024-10-08 19:07:04.646493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:35.937 [2024-10-08 19:07:04.646507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:39:35.937 [2024-10-08 19:07:04.646519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:35.937 [2024-10-08 19:07:04.646530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:35.937 [2024-10-08 19:07:04.646682] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 555.346 ms, result 0 00:39:37.316 00:39:37.316 00:39:37.316 19:07:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:39:39.223 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:39:39.223 19:07:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:39:39.224 19:07:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:39:39.224 19:07:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:39:39.224 19:07:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:39:39.483 19:07:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:39:39.483 19:07:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:39:39.483 19:07:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:39:39.483 19:07:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 79228 00:39:39.483 19:07:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # '[' -z 79228 ']' 00:39:39.483 19:07:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # kill -0 79228 00:39:39.483 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (79228) - No such process 00:39:39.483 Process with pid 79228 is not found 00:39:39.483 19:07:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@977 -- # echo 'Process with pid 79228 is not found' 00:39:39.483 19:07:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:39:40.049 19:07:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:39:40.049 Remove shared memory files 00:39:40.049 19:07:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:39:40.049 19:07:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:39:40.049 19:07:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:39:40.049 19:07:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:39:40.049 19:07:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:39:40.049 19:07:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:39:40.049 ************************************ 00:39:40.049 END TEST ftl_dirty_shutdown 00:39:40.049 ************************************ 00:39:40.049 00:39:40.049 real 3m21.360s 00:39:40.049 user 3m47.302s 00:39:40.049 sys 0m39.530s 00:39:40.049 19:07:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:40.049 19:07:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:39:40.049 19:07:08 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:39:40.049 19:07:08 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:39:40.049 19:07:08 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:40.049 19:07:08 ftl -- common/autotest_common.sh@10 -- # set +x 00:39:40.049 ************************************ 00:39:40.049 START TEST ftl_upgrade_shutdown 00:39:40.049 ************************************ 00:39:40.049 19:07:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:39:40.049 * Looking for test storage... 
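Editor's note: the "kill: (79228) - No such process" error above is expected rather than a failure. By the time restore_kill runs, the FTL app has already exited, so killprocess probes the pid with kill -0 before signalling and falls back to the "is not found" message. A minimal sketch of that guard, assuming a simplified hypothetical helper (the real killprocess in autotest_common.sh additionally escalates SIGTERM to SIGKILL and waits for the process to disappear):

# Sketch of the probe-then-kill guard seen above. Assumption:
# simplified vs. SPDK's killprocess(); name kill_if_running is ours.
kill_if_running() {
    local pid=$1
    if kill -0 "$pid" 2>/dev/null; then
        kill "$pid"    # still alive: ask it to exit
    else
        echo "Process with pid $pid is not found"
    fi
}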
00:39:40.049 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:39:40.050 19:07:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:39:40.050 19:07:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:39:40.050 19:07:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:39:40.050 19:07:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:39:40.050 19:07:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:40.050 19:07:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:40.050 19:07:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:40.050 19:07:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:39:40.050 19:07:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:39:40.050 19:07:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:39:40.050 19:07:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:39:40.050 19:07:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:39:40.050 19:07:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:39:40.050 19:07:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:39:40.050 19:07:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:40.050 19:07:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:39:40.050 19:07:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:39:40.050 19:07:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:40.050 19:07:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:40.050 19:07:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:39:40.050 19:07:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:39:40.050 19:07:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:40.050 19:07:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:39:40.050 19:07:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:39:40.050 19:07:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:39:40.050 19:07:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:39:40.050 19:07:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:40.050 19:07:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:39:40.050 19:07:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:39:40.050 19:07:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:40.050 19:07:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:40.050 19:07:08 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:39:40.050 19:07:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:40.050 19:07:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:39:40.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:40.050 --rc genhtml_branch_coverage=1 00:39:40.050 --rc genhtml_function_coverage=1 00:39:40.050 --rc genhtml_legend=1 00:39:40.050 --rc geninfo_all_blocks=1 00:39:40.050 --rc geninfo_unexecuted_blocks=1 00:39:40.050 00:39:40.050 ' 00:39:40.050 19:07:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:39:40.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:40.050 --rc genhtml_branch_coverage=1 00:39:40.050 --rc genhtml_function_coverage=1 00:39:40.050 --rc genhtml_legend=1 00:39:40.050 --rc geninfo_all_blocks=1 00:39:40.050 --rc geninfo_unexecuted_blocks=1 00:39:40.050 00:39:40.050 ' 00:39:40.050 19:07:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:39:40.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:40.050 --rc genhtml_branch_coverage=1 00:39:40.050 --rc genhtml_function_coverage=1 00:39:40.050 --rc genhtml_legend=1 00:39:40.050 --rc geninfo_all_blocks=1 00:39:40.050 --rc geninfo_unexecuted_blocks=1 00:39:40.050 00:39:40.050 ' 00:39:40.050 19:07:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:39:40.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:40.050 --rc genhtml_branch_coverage=1 00:39:40.050 --rc genhtml_function_coverage=1 00:39:40.050 --rc genhtml_legend=1 00:39:40.050 --rc geninfo_all_blocks=1 00:39:40.050 --rc geninfo_unexecuted_blocks=1 00:39:40.050 00:39:40.050 ' 00:39:40.050 19:07:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:39:40.050 19:07:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:39:40.050 19:07:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:39:40.050 19:07:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:39:40.050 19:07:08 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:39:40.310 19:07:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:39:40.310 19:07:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:40.310 19:07:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:39:40.310 19:07:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:39:40.310 19:07:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:40.310 19:07:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:40.310 19:07:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:39:40.310 19:07:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:39:40.310 19:07:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:39:40.310 19:07:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:39:40.310 19:07:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:39:40.310 19:07:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:39:40.310 19:07:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:40.310 19:07:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:40.310 19:07:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:39:40.310 19:07:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:39:40.310 19:07:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:39:40.310 19:07:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:39:40.310 19:07:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:39:40.310 19:07:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:39:40.310 19:07:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:39:40.310 19:07:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:39:40.310 19:07:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:40.310 19:07:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:40.310 19:07:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:39:40.310 19:07:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:39:40.310 19:07:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:39:40.310 19:07:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:39:40.310 19:07:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:39:40.310 19:07:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:39:40.310 19:07:08 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:39:40.310 19:07:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:39:40.310 19:07:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:39:40.310 19:07:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:39:40.310 19:07:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:39:40.310 19:07:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:39:40.310 19:07:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:39:40.310 19:07:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:39:40.310 19:07:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:39:40.310 19:07:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:39:40.310 19:07:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:39:40.310 19:07:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81375 00:39:40.310 19:07:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:39:40.310 19:07:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81375 00:39:40.310 19:07:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 81375 ']' 00:39:40.310 19:07:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:39:40.310 19:07:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:40.310 19:07:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:40.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:40.310 19:07:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:40.310 19:07:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:40.310 19:07:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:39:40.310 [2024-10-08 19:07:08.965114] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
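Editor's note: the launch above (spdk_tgt started with --cpumask=[0], pid 81375, then waitforlisten) follows the standard SPDK test handshake: start the target in the background pinned to core 0, record its pid, and poll the RPC socket until the target answers. A minimal sketch, assuming a bare polling loop (the real waitforlisten in autotest_common.sh also enforces a retry limit and verifies the pid is still alive):

# Sketch of the start-and-wait handshake, simplified from
# autotest_common.sh. Assumption: no timeout or liveness checks here.
spdk=/home/vagrant/spdk_repo/spdk
"$spdk/build/bin/spdk_tgt" --cpumask='[0]' &
spdk_tgt_pid=$!
until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5    # RPC server not up yet; retry
done
echo "spdk_tgt $spdk_tgt_pid is listening on /var/tmp/spdk.sock"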
00:39:40.310 [2024-10-08 19:07:08.965570] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81375 ] 00:39:40.569 [2024-10-08 19:07:09.150832] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:40.827 [2024-10-08 19:07:09.349499] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:39:41.762 19:07:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:41.762 19:07:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:39:41.762 19:07:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:39:41.762 19:07:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:39:41.762 19:07:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:39:41.762 19:07:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:39:41.762 19:07:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:39:41.762 19:07:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:39:41.762 19:07:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:39:41.762 19:07:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:39:41.762 19:07:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:39:41.762 19:07:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:39:41.762 19:07:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:39:41.762 19:07:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:39:41.762 19:07:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:39:41.762 19:07:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:39:41.762 19:07:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:39:41.762 19:07:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:39:41.762 19:07:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:39:41.762 19:07:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:39:41.762 19:07:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:39:41.762 19:07:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:39:41.762 19:07:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:39:42.020 19:07:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:39:42.020 19:07:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:39:42.020 19:07:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:39:42.020 19:07:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:39:42.020 19:07:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:39:42.020 19:07:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:39:42.020 19:07:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:39:42.020 19:07:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:39:42.331 19:07:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:39:42.331 { 00:39:42.331 "name": "basen1", 00:39:42.331 "aliases": [ 00:39:42.331 "a0230fa7-1b8e-4768-a369-3c5f40318a1e" 00:39:42.331 ], 00:39:42.331 "product_name": "NVMe disk", 00:39:42.331 "block_size": 4096, 00:39:42.331 "num_blocks": 1310720, 00:39:42.331 "uuid": "a0230fa7-1b8e-4768-a369-3c5f40318a1e", 00:39:42.331 "numa_id": -1, 00:39:42.331 "assigned_rate_limits": { 00:39:42.331 "rw_ios_per_sec": 0, 00:39:42.331 "rw_mbytes_per_sec": 0, 00:39:42.331 "r_mbytes_per_sec": 0, 00:39:42.331 "w_mbytes_per_sec": 0 00:39:42.331 }, 00:39:42.331 "claimed": true, 00:39:42.331 "claim_type": "read_many_write_one", 00:39:42.331 "zoned": false, 00:39:42.331 "supported_io_types": { 00:39:42.331 "read": true, 00:39:42.331 "write": true, 00:39:42.331 "unmap": true, 00:39:42.331 "flush": true, 00:39:42.331 "reset": true, 00:39:42.331 "nvme_admin": true, 00:39:42.331 "nvme_io": true, 00:39:42.331 "nvme_io_md": false, 00:39:42.331 "write_zeroes": true, 00:39:42.331 "zcopy": false, 00:39:42.331 "get_zone_info": false, 00:39:42.331 "zone_management": false, 00:39:42.331 "zone_append": false, 00:39:42.331 "compare": true, 00:39:42.331 "compare_and_write": false, 00:39:42.331 "abort": true, 00:39:42.331 "seek_hole": false, 00:39:42.331 "seek_data": false, 00:39:42.331 "copy": true, 00:39:42.331 "nvme_iov_md": false 00:39:42.331 }, 00:39:42.331 "driver_specific": { 00:39:42.331 "nvme": [ 00:39:42.331 { 00:39:42.331 "pci_address": "0000:00:11.0", 00:39:42.331 "trid": { 00:39:42.331 "trtype": "PCIe", 00:39:42.331 "traddr": "0000:00:11.0" 00:39:42.331 }, 00:39:42.331 "ctrlr_data": { 00:39:42.331 "cntlid": 0, 00:39:42.331 "vendor_id": "0x1b36", 00:39:42.331 "model_number": "QEMU NVMe Ctrl", 00:39:42.331 "serial_number": "12341", 00:39:42.331 "firmware_revision": "8.0.0", 00:39:42.331 "subnqn": "nqn.2019-08.org.qemu:12341", 00:39:42.331 "oacs": { 00:39:42.331 "security": 0, 00:39:42.331 "format": 1, 00:39:42.331 "firmware": 0, 00:39:42.331 "ns_manage": 1 00:39:42.331 }, 00:39:42.331 "multi_ctrlr": false, 00:39:42.331 "ana_reporting": false 00:39:42.331 }, 00:39:42.331 "vs": { 00:39:42.331 "nvme_version": "1.4" 00:39:42.331 }, 00:39:42.331 "ns_data": { 00:39:42.331 "id": 1, 00:39:42.331 "can_share": false 00:39:42.331 } 00:39:42.331 } 00:39:42.331 ], 00:39:42.331 "mp_policy": "active_passive" 00:39:42.331 } 00:39:42.331 } 00:39:42.331 ]' 00:39:42.331 19:07:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:39:42.331 19:07:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:39:42.331 19:07:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:39:42.331 19:07:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:39:42.331 19:07:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:39:42.331 19:07:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:39:42.331 19:07:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:39:42.331 19:07:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:39:42.331 19:07:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:39:42.331 19:07:10 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:39:42.331 19:07:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:39:42.331 19:07:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=5901a52a-9a5f-492f-83dc-c6737f33f35b 00:39:42.331 19:07:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:39:42.331 19:07:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5901a52a-9a5f-492f-83dc-c6737f33f35b 00:39:42.604 19:07:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:39:42.867 19:07:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=7ff34663-ba01-40f5-99d2-b96d864ac527 00:39:42.868 19:07:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 7ff34663-ba01-40f5-99d2-b96d864ac527 00:39:43.128 19:07:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=e701d851-04aa-4235-aa68-14aa8007cf73 00:39:43.128 19:07:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z e701d851-04aa-4235-aa68-14aa8007cf73 ]] 00:39:43.128 19:07:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 e701d851-04aa-4235-aa68-14aa8007cf73 5120 00:39:43.128 19:07:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:39:43.128 19:07:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:39:43.128 19:07:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=e701d851-04aa-4235-aa68-14aa8007cf73 00:39:43.128 19:07:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:39:43.128 19:07:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size e701d851-04aa-4235-aa68-14aa8007cf73 00:39:43.128 19:07:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=e701d851-04aa-4235-aa68-14aa8007cf73 00:39:43.128 19:07:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:39:43.128 19:07:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:39:43.128 19:07:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:39:43.128 19:07:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e701d851-04aa-4235-aa68-14aa8007cf73 00:39:43.387 19:07:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:39:43.387 { 00:39:43.387 "name": "e701d851-04aa-4235-aa68-14aa8007cf73", 00:39:43.387 "aliases": [ 00:39:43.387 "lvs/basen1p0" 00:39:43.387 ], 00:39:43.387 "product_name": "Logical Volume", 00:39:43.387 "block_size": 4096, 00:39:43.387 "num_blocks": 5242880, 00:39:43.387 "uuid": "e701d851-04aa-4235-aa68-14aa8007cf73", 00:39:43.387 "assigned_rate_limits": { 00:39:43.387 "rw_ios_per_sec": 0, 00:39:43.387 "rw_mbytes_per_sec": 0, 00:39:43.387 "r_mbytes_per_sec": 0, 00:39:43.387 "w_mbytes_per_sec": 0 00:39:43.387 }, 00:39:43.387 "claimed": false, 00:39:43.387 "zoned": false, 00:39:43.387 "supported_io_types": { 00:39:43.387 "read": true, 00:39:43.387 "write": true, 00:39:43.387 "unmap": true, 00:39:43.387 "flush": false, 00:39:43.387 "reset": true, 00:39:43.387 "nvme_admin": false, 00:39:43.387 "nvme_io": false, 00:39:43.387 "nvme_io_md": false, 00:39:43.387 "write_zeroes": 
true, 00:39:43.387 "zcopy": false, 00:39:43.387 "get_zone_info": false, 00:39:43.387 "zone_management": false, 00:39:43.387 "zone_append": false, 00:39:43.387 "compare": false, 00:39:43.387 "compare_and_write": false, 00:39:43.387 "abort": false, 00:39:43.387 "seek_hole": true, 00:39:43.387 "seek_data": true, 00:39:43.387 "copy": false, 00:39:43.387 "nvme_iov_md": false 00:39:43.387 }, 00:39:43.387 "driver_specific": { 00:39:43.387 "lvol": { 00:39:43.387 "lvol_store_uuid": "7ff34663-ba01-40f5-99d2-b96d864ac527", 00:39:43.387 "base_bdev": "basen1", 00:39:43.387 "thin_provision": true, 00:39:43.387 "num_allocated_clusters": 0, 00:39:43.387 "snapshot": false, 00:39:43.387 "clone": false, 00:39:43.387 "esnap_clone": false 00:39:43.387 } 00:39:43.387 } 00:39:43.387 } 00:39:43.387 ]' 00:39:43.387 19:07:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:39:43.387 19:07:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:39:43.646 19:07:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:39:43.646 19:07:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:39:43.646 19:07:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:39:43.646 19:07:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:39:43.646 19:07:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:39:43.646 19:07:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:39:43.646 19:07:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:39:43.904 19:07:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:39:43.904 19:07:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:39:43.904 19:07:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:39:44.163 19:07:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:39:44.163 19:07:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:39:44.163 19:07:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d e701d851-04aa-4235-aa68-14aa8007cf73 -c cachen1p0 --l2p_dram_limit 2 00:39:44.423 [2024-10-08 19:07:13.037590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:44.423 [2024-10-08 19:07:13.037679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:39:44.423 [2024-10-08 19:07:13.037708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:39:44.423 [2024-10-08 19:07:13.037721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:44.423 [2024-10-08 19:07:13.037809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:44.423 [2024-10-08 19:07:13.037822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:39:44.423 [2024-10-08 19:07:13.037837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.062 ms 00:39:44.423 [2024-10-08 19:07:13.037849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:44.423 [2024-10-08 19:07:13.037879] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:39:44.423 [2024-10-08 
19:07:13.039004] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:39:44.423 [2024-10-08 19:07:13.039038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:44.423 [2024-10-08 19:07:13.039050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:39:44.423 [2024-10-08 19:07:13.039065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.163 ms 00:39:44.423 [2024-10-08 19:07:13.039079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:44.423 [2024-10-08 19:07:13.039176] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 86c06e3e-1e42-4b15-b343-0a472f3d71a8 00:39:44.423 [2024-10-08 19:07:13.041747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:44.423 [2024-10-08 19:07:13.041793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:39:44.423 [2024-10-08 19:07:13.041808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:39:44.423 [2024-10-08 19:07:13.041825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:44.423 [2024-10-08 19:07:13.056536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:44.423 [2024-10-08 19:07:13.056583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:39:44.423 [2024-10-08 19:07:13.056598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.638 ms 00:39:44.423 [2024-10-08 19:07:13.056613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:44.423 [2024-10-08 19:07:13.056689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:44.423 [2024-10-08 19:07:13.056709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:39:44.423 [2024-10-08 19:07:13.056722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:39:44.423 [2024-10-08 19:07:13.056743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:44.423 [2024-10-08 19:07:13.056824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:44.423 [2024-10-08 19:07:13.056841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:39:44.423 [2024-10-08 19:07:13.056852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:39:44.423 [2024-10-08 19:07:13.056867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:44.423 [2024-10-08 19:07:13.056901] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:39:44.423 [2024-10-08 19:07:13.063591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:44.423 [2024-10-08 19:07:13.063817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:39:44.423 [2024-10-08 19:07:13.063846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.699 ms 00:39:44.423 [2024-10-08 19:07:13.063858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:44.423 [2024-10-08 19:07:13.063904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:44.423 [2024-10-08 19:07:13.063916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:39:44.423 [2024-10-08 19:07:13.063932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:39:44.423 [2024-10-08 19:07:13.063946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:39:44.423 [2024-10-08 19:07:13.064004] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:39:44.423 [2024-10-08 19:07:13.064148] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:39:44.423 [2024-10-08 19:07:13.064172] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:39:44.423 [2024-10-08 19:07:13.064187] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:39:44.423 [2024-10-08 19:07:13.064209] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:39:44.423 [2024-10-08 19:07:13.064222] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:39:44.423 [2024-10-08 19:07:13.064237] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:39:44.423 [2024-10-08 19:07:13.064248] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:39:44.423 [2024-10-08 19:07:13.064263] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:39:44.423 [2024-10-08 19:07:13.064274] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:39:44.423 [2024-10-08 19:07:13.064288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:44.423 [2024-10-08 19:07:13.064299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:39:44.423 [2024-10-08 19:07:13.064314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.286 ms 00:39:44.423 [2024-10-08 19:07:13.064325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:44.423 [2024-10-08 19:07:13.064405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:44.423 [2024-10-08 19:07:13.064434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:39:44.423 [2024-10-08 19:07:13.064449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 00:39:44.423 [2024-10-08 19:07:13.064460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:44.423 [2024-10-08 19:07:13.064559] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:39:44.423 [2024-10-08 19:07:13.064572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:39:44.423 [2024-10-08 19:07:13.064587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:39:44.423 [2024-10-08 19:07:13.064598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:39:44.423 [2024-10-08 19:07:13.064613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:39:44.423 [2024-10-08 19:07:13.064623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:39:44.423 [2024-10-08 19:07:13.064636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:39:44.423 [2024-10-08 19:07:13.064646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:39:44.423 [2024-10-08 19:07:13.064659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:39:44.423 [2024-10-08 19:07:13.064668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:39:44.423 [2024-10-08 19:07:13.064681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:39:44.423 [2024-10-08 19:07:13.064691] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:39:44.423 [2024-10-08 19:07:13.064705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:39:44.423 [2024-10-08 19:07:13.064715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:39:44.423 [2024-10-08 19:07:13.064729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:39:44.423 [2024-10-08 19:07:13.064738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:39:44.423 [2024-10-08 19:07:13.064754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:39:44.423 [2024-10-08 19:07:13.064764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:39:44.423 [2024-10-08 19:07:13.064776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:39:44.423 [2024-10-08 19:07:13.064786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:39:44.423 [2024-10-08 19:07:13.064799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:39:44.423 [2024-10-08 19:07:13.064808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:39:44.423 [2024-10-08 19:07:13.064822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:39:44.423 [2024-10-08 19:07:13.064831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:39:44.423 [2024-10-08 19:07:13.064844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:39:44.423 [2024-10-08 19:07:13.064853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:39:44.423 [2024-10-08 19:07:13.064865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:39:44.423 [2024-10-08 19:07:13.064874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:39:44.423 [2024-10-08 19:07:13.064887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:39:44.423 [2024-10-08 19:07:13.064897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:39:44.423 [2024-10-08 19:07:13.064909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:39:44.423 [2024-10-08 19:07:13.064919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:39:44.423 [2024-10-08 19:07:13.064934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:39:44.423 [2024-10-08 19:07:13.064944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:39:44.423 [2024-10-08 19:07:13.064968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:39:44.423 [2024-10-08 19:07:13.064978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:39:44.423 [2024-10-08 19:07:13.064991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:39:44.423 [2024-10-08 19:07:13.065000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:39:44.423 [2024-10-08 19:07:13.065014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:39:44.423 [2024-10-08 19:07:13.065028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:39:44.423 [2024-10-08 19:07:13.065041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:39:44.423 [2024-10-08 19:07:13.065050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:39:44.423 [2024-10-08 19:07:13.065063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:39:44.423 [2024-10-08 19:07:13.065072] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:39:44.423 [2024-10-08 19:07:13.065087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:39:44.423 [2024-10-08 19:07:13.065101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:39:44.424 [2024-10-08 19:07:13.065116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:39:44.424 [2024-10-08 19:07:13.065127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:39:44.424 [2024-10-08 19:07:13.065146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:39:44.424 [2024-10-08 19:07:13.065156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:39:44.424 [2024-10-08 19:07:13.065180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:39:44.424 [2024-10-08 19:07:13.065190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:39:44.424 [2024-10-08 19:07:13.065203] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:39:44.424 [2024-10-08 19:07:13.065219] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:39:44.424 [2024-10-08 19:07:13.065237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:44.424 [2024-10-08 19:07:13.065249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:39:44.424 [2024-10-08 19:07:13.065263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:39:44.424 [2024-10-08 19:07:13.065274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:39:44.424 [2024-10-08 19:07:13.065289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:39:44.424 [2024-10-08 19:07:13.065300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:39:44.424 [2024-10-08 19:07:13.065315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:39:44.424 [2024-10-08 19:07:13.065326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:39:44.424 [2024-10-08 19:07:13.065340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:39:44.424 [2024-10-08 19:07:13.065350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:39:44.424 [2024-10-08 19:07:13.065368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:39:44.424 [2024-10-08 19:07:13.065378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:39:44.424 [2024-10-08 19:07:13.065392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:39:44.424 [2024-10-08 19:07:13.065403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:39:44.424 [2024-10-08 19:07:13.065417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:39:44.424 [2024-10-08 19:07:13.065427] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:39:44.424 [2024-10-08 19:07:13.065442] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:44.424 [2024-10-08 19:07:13.065454] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:39:44.424 [2024-10-08 19:07:13.065469] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:39:44.424 [2024-10-08 19:07:13.065480] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:39:44.424 [2024-10-08 19:07:13.065494] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:39:44.424 [2024-10-08 19:07:13.065504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:44.424 [2024-10-08 19:07:13.065518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:39:44.424 [2024-10-08 19:07:13.065529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.007 ms 00:39:44.424 [2024-10-08 19:07:13.065545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:44.424 [2024-10-08 19:07:13.065599] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
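Editor's note: the NV cache scrub announced above takes a couple of seconds (2456 ms in the trace that follows). Everything from "Check configuration" through this layout dump was driven by a short series of RPCs issued earlier in the test; condensed here from this run's own xtrace (device names and sizes as configured above; the two UUIDs are this run's values and change on every run):

# Condensed from the rpc.py calls traced earlier in this run.
# The test also deletes any stale lvstore first (clear_lvols).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0
$rpc bdev_lvol_create_lvstore basen1 lvs
$rpc bdev_lvol_create basen1p0 20480 -t -u 7ff34663-ba01-40f5-99d2-b96d864ac527
$rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
$rpc bdev_split_create cachen1 -s 5120 1
$rpc -t 60 bdev_ftl_create -b ftl -d e701d851-04aa-4235-aa68-14aa8007cf73 -c cachen1p0 --l2p_dram_limit 2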
00:39:44.424 [2024-10-08 19:07:13.065619] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:39:46.996 [2024-10-08 19:07:15.521801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:46.996 [2024-10-08 19:07:15.522157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:39:46.996 [2024-10-08 19:07:15.522197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2456.184 ms 00:39:46.996 [2024-10-08 19:07:15.522219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:46.997 [2024-10-08 19:07:15.582084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:46.997 [2024-10-08 19:07:15.582155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:39:46.997 [2024-10-08 19:07:15.582180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 59.297 ms 00:39:46.997 [2024-10-08 19:07:15.582202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:46.997 [2024-10-08 19:07:15.582341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:46.997 [2024-10-08 19:07:15.582366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:39:46.997 [2024-10-08 19:07:15.582384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:39:46.997 [2024-10-08 19:07:15.582408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:46.997 [2024-10-08 19:07:15.649365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:46.997 [2024-10-08 19:07:15.649413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:39:46.997 [2024-10-08 19:07:15.649448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 66.856 ms 00:39:46.997 [2024-10-08 19:07:15.649463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:46.997 [2024-10-08 19:07:15.649503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:46.997 [2024-10-08 19:07:15.649518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:39:46.997 [2024-10-08 19:07:15.649545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:39:46.997 [2024-10-08 19:07:15.649559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:46.997 [2024-10-08 19:07:15.650100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:46.997 [2024-10-08 19:07:15.650119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:39:46.997 [2024-10-08 19:07:15.650142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.480 ms 00:39:46.997 [2024-10-08 19:07:15.650160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:46.997 [2024-10-08 19:07:15.650200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:46.997 [2024-10-08 19:07:15.650214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:39:46.997 [2024-10-08 19:07:15.650225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:39:46.997 [2024-10-08 19:07:15.650240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:46.997 [2024-10-08 19:07:15.670092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:46.997 [2024-10-08 19:07:15.670134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:39:46.997 [2024-10-08 19:07:15.670165] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.831 ms 00:39:46.997 [2024-10-08 19:07:15.670179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:46.997 [2024-10-08 19:07:15.682824] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:39:46.997 [2024-10-08 19:07:15.683908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:46.997 [2024-10-08 19:07:15.683936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:39:46.997 [2024-10-08 19:07:15.683952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.635 ms 00:39:46.997 [2024-10-08 19:07:15.683978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:46.997 [2024-10-08 19:07:15.712709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:46.997 [2024-10-08 19:07:15.712900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:39:46.997 [2024-10-08 19:07:15.712931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.696 ms 00:39:46.997 [2024-10-08 19:07:15.712943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:46.997 [2024-10-08 19:07:15.713051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:46.997 [2024-10-08 19:07:15.713065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:39:46.997 [2024-10-08 19:07:15.713082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:39:46.997 [2024-10-08 19:07:15.713092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:46.997 [2024-10-08 19:07:15.750085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:46.997 [2024-10-08 19:07:15.750237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:39:46.997 [2024-10-08 19:07:15.750264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.933 ms 00:39:46.997 [2024-10-08 19:07:15.750275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:47.256 [2024-10-08 19:07:15.786753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:47.256 [2024-10-08 19:07:15.786791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:39:47.256 [2024-10-08 19:07:15.786808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.389 ms 00:39:47.256 [2024-10-08 19:07:15.786818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:47.256 [2024-10-08 19:07:15.787602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:47.256 [2024-10-08 19:07:15.787630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:39:47.256 [2024-10-08 19:07:15.787645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.740 ms 00:39:47.256 [2024-10-08 19:07:15.787655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:47.256 [2024-10-08 19:07:15.885850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:47.256 [2024-10-08 19:07:15.885899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:39:47.256 [2024-10-08 19:07:15.885922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 98.118 ms 00:39:47.256 [2024-10-08 19:07:15.885936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:47.256 [2024-10-08 19:07:15.923924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:39:47.256 [2024-10-08 19:07:15.923975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:39:47.256 [2024-10-08 19:07:15.924009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.887 ms 00:39:47.256 [2024-10-08 19:07:15.924020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:47.256 [2024-10-08 19:07:15.961055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:47.256 [2024-10-08 19:07:15.961093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:39:47.256 [2024-10-08 19:07:15.961110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.987 ms 00:39:47.256 [2024-10-08 19:07:15.961136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:47.256 [2024-10-08 19:07:15.998257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:47.256 [2024-10-08 19:07:15.998296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:39:47.256 [2024-10-08 19:07:15.998312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.075 ms 00:39:47.256 [2024-10-08 19:07:15.998339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:47.256 [2024-10-08 19:07:15.998388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:47.256 [2024-10-08 19:07:15.998400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:39:47.256 [2024-10-08 19:07:15.998417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:39:47.256 [2024-10-08 19:07:15.998430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:47.256 [2024-10-08 19:07:15.998544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:47.256 [2024-10-08 19:07:15.998557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:39:47.256 [2024-10-08 19:07:15.998570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:39:47.256 [2024-10-08 19:07:15.998584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:47.256 [2024-10-08 19:07:15.999800] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2961.659 ms, result 0 00:39:47.256 { 00:39:47.256 "name": "ftl", 00:39:47.256 "uuid": "86c06e3e-1e42-4b15-b343-0a472f3d71a8" 00:39:47.256 } 00:39:47.515 19:07:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:39:47.515 [2024-10-08 19:07:16.246893] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:47.515 19:07:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:39:47.774 19:07:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:39:48.032 [2024-10-08 19:07:16.647332] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:39:48.033 19:07:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:39:48.292 [2024-10-08 19:07:16.913365] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:39:48.292 19:07:16 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:39:48.552 Fill FTL, iteration 1 00:39:48.552 19:07:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:39:48.552 19:07:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:39:48.552 19:07:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:39:48.552 19:07:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:39:48.552 19:07:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:39:48.552 19:07:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:39:48.552 19:07:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:39:48.552 19:07:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:39:48.552 19:07:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:39:48.552 19:07:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:39:48.552 19:07:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:39:48.552 19:07:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:39:48.552 19:07:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:39:48.552 19:07:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:39:48.552 19:07:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:39:48.552 19:07:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:39:48.552 19:07:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:39:48.552 19:07:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=81493 00:39:48.552 19:07:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:39:48.552 19:07:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 81493 /var/tmp/spdk.tgt.sock 00:39:48.552 19:07:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 81493 ']' 00:39:48.552 19:07:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:39:48.552 19:07:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:48.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:39:48.552 19:07:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:39:48.552 19:07:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:48.552 19:07:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:39:48.811 [2024-10-08 19:07:17.358249] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
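The fill geometry follows directly from the variables the script just set: each iteration streams bs*count bytes of /dev/urandom into the exported FTL bdev at queue depth 2. A quick sketch of the arithmetic:

# From upgrade_shutdown.sh@28-34 above: size=1073741824, bs=1048576,
# count=1024, qd=2, iterations=2
bs=1048576; count=1024; iterations=2
echo $(( bs * count ))                # 1073741824 bytes = 1 GiB per fill pass
echo $(( bs * count * iterations ))   # 2 GiB of random data across both passes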
00:39:48.811 [2024-10-08 19:07:17.358388] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81493 ] 00:39:48.811 [2024-10-08 19:07:17.521136] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:49.070 [2024-10-08 19:07:17.734290] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:39:50.007 19:07:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:50.007 19:07:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:39:50.007 19:07:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:39:50.267 ftln1 00:39:50.267 19:07:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:39:50.267 19:07:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:39:50.526 19:07:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:39:50.526 19:07:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 81493 00:39:50.526 19:07:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 81493 ']' 00:39:50.526 19:07:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 81493 00:39:50.526 19:07:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:39:50.526 19:07:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:50.526 19:07:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81493 00:39:50.526 killing process with pid 81493 00:39:50.526 19:07:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:39:50.526 19:07:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:39:50.526 19:07:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81493' 00:39:50.526 19:07:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 81493 00:39:50.526 19:07:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 81493 00:39:53.060 19:07:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:39:53.060 19:07:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:39:53.060 [2024-10-08 19:07:21.689325] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
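As the trace above shows, the spdk_tgt instance on /var/tmp/spdk.tgt.sock is a throwaway initiator: it attaches to the target's subsystem (the namespace bdev comes back as ftln1), its bdev subsystem config is dumped via save_subsystem_config, and the process is then killed so spdk_dd can load that JSON directly. Condensed, with paths and NQN as in this run, and assuming the wrapped output is what lands in ini.json:

rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
$rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2018-09.io.spdk:cnode0          # prints the new bdev name: ftln1
{ echo '{"subsystems": ['; $rpc save_subsystem_config -n bdev; echo ']}'; } \
    > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json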
00:39:53.060 [2024-10-08 19:07:21.689499] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81561 ] 00:39:53.319 [2024-10-08 19:07:21.871493] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:53.577 [2024-10-08 19:07:22.074880] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:39:54.953  [2024-10-08T19:07:24.646Z] Copying: 243/1024 [MB] (243 MBps) [2024-10-08T19:07:25.582Z] Copying: 483/1024 [MB] (240 MBps) [2024-10-08T19:07:26.962Z] Copying: 727/1024 [MB] (244 MBps) [2024-10-08T19:07:26.962Z] Copying: 968/1024 [MB] (241 MBps) [2024-10-08T19:07:28.340Z] Copying: 1024/1024 [MB] (average 241 MBps) 00:39:59.583 00:39:59.583 Calculate MD5 checksum, iteration 1 00:39:59.583 19:07:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:39:59.583 19:07:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:39:59.583 19:07:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:39:59.583 19:07:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:39:59.583 19:07:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:39:59.583 19:07:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:39:59.583 19:07:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:39:59.583 19:07:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:39:59.583 [2024-10-08 19:07:28.206414] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
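The checksum pass mirrors the fill: the same 1 GiB window is read back from ftln1 over NVMe/TCP into a plain file and hashed. Condensed from the traced steps in this run:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
    --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
    --bs=1048576 --count=1024 --qd=2 --skip=0
md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' '   # -> 75c68b1bf449b8f2f2b75209468bdbe6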
00:39:59.583 [2024-10-08 19:07:28.206600] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81625 ] 00:39:59.841 [2024-10-08 19:07:28.379611] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:59.841 [2024-10-08 19:07:28.574018] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:40:01.750  [2024-10-08T19:07:30.765Z] Copying: 660/1024 [MB] (660 MBps) [2024-10-08T19:07:32.139Z] Copying: 1024/1024 [MB] (average 617 MBps) 00:40:03.382 00:40:03.382 19:07:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:40:03.382 19:07:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:40:05.287 19:07:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:40:05.287 Fill FTL, iteration 2 00:40:05.287 19:07:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=75c68b1bf449b8f2f2b75209468bdbe6 00:40:05.287 19:07:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:40:05.287 19:07:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:40:05.287 19:07:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:40:05.287 19:07:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:40:05.287 19:07:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:40:05.287 19:07:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:40:05.287 19:07:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:40:05.287 19:07:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:40:05.287 19:07:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:40:05.287 [2024-10-08 19:07:33.698764] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
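With the first digest banked in sums[0], the window then advances by exactly the count just written, so pass 2 never touches pass 1's blocks. The bookkeeping, in 1 MiB units:

# seek (write) / skip (read-back) offsets per iteration:
#   iteration 1: --seek=0     --skip=0      blocks 0..1023
#   iteration 2: --seek=1024  --skip=1024   blocks 1024..2047
# The sums[] digests are retained so the same windows can presumably be
# re-read and verified after the prep_upgrade_on_shutdown restart later
# in this run.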
00:40:05.287 [2024-10-08 19:07:33.699095] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81681 ] 00:40:05.287 [2024-10-08 19:07:33.873205] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:05.546 [2024-10-08 19:07:34.163681] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:40:06.920  [2024-10-08T19:07:37.054Z] Copying: 235/1024 [MB] (235 MBps) [2024-10-08T19:07:37.621Z] Copying: 461/1024 [MB] (226 MBps) [2024-10-08T19:07:38.996Z] Copying: 684/1024 [MB] (223 MBps) [2024-10-08T19:07:39.254Z] Copying: 915/1024 [MB] (231 MBps) [2024-10-08T19:07:40.632Z] Copying: 1024/1024 [MB] (average 227 MBps) 00:40:11.875 00:40:11.875 19:07:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:40:11.875 19:07:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:40:11.875 Calculate MD5 checksum, iteration 2 00:40:11.875 19:07:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:40:11.875 19:07:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:40:11.875 19:07:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:40:11.875 19:07:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:40:11.875 19:07:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:40:11.875 19:07:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:40:11.875 [2024-10-08 19:07:40.523715] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
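The averages printed by the copy loop are consistent with the wall clock. A rough sanity check against the [...T19:07:xxZ] progress stamps above, treating the span as roughly 4.5 s (the start time is inferred, not stamped):

echo $(( 1024 * 10 / 45 ))   # 1024 MiB / ~4.5 s ≈ 227 MB/s, matching "(average 227 MBps)"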
00:40:11.875 [2024-10-08 19:07:40.523836] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81756 ] 00:40:12.134 [2024-10-08 19:07:40.688430] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:12.393 [2024-10-08 19:07:40.899731] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:40:14.296  [2024-10-08T19:07:43.312Z] Copying: 671/1024 [MB] (671 MBps) [2024-10-08T19:07:44.689Z] Copying: 1024/1024 [MB] (average 656 MBps) 00:40:15.932 00:40:15.932 19:07:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:40:15.932 19:07:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:40:17.834 19:07:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:40:17.835 19:07:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=11d116c08b950289769ad06320a9b1e2 00:40:17.835 19:07:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:40:17.835 19:07:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:40:17.835 19:07:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:40:18.094 [2024-10-08 19:07:46.747664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:18.094 [2024-10-08 19:07:46.747977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:40:18.094 [2024-10-08 19:07:46.748006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:40:18.094 [2024-10-08 19:07:46.748024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:18.094 [2024-10-08 19:07:46.748071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:18.094 [2024-10-08 19:07:46.748084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:40:18.094 [2024-10-08 19:07:46.748095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:40:18.094 [2024-10-08 19:07:46.748105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:18.094 [2024-10-08 19:07:46.748127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:18.094 [2024-10-08 19:07:46.748138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:40:18.094 [2024-10-08 19:07:46.748149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:40:18.094 [2024-10-08 19:07:46.748159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:18.094 [2024-10-08 19:07:46.748228] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.565 ms, result 0 00:40:18.094 true 00:40:18.094 19:07:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:40:18.352 { 00:40:18.352 "name": "ftl", 00:40:18.352 "properties": [ 00:40:18.352 { 00:40:18.352 "name": "superblock_version", 00:40:18.352 "value": 5, 00:40:18.352 "read-only": true 00:40:18.352 }, 00:40:18.352 { 00:40:18.352 "name": "base_device", 00:40:18.352 "bands": [ 00:40:18.352 { 00:40:18.352 "id": 0, 00:40:18.352 "state": "FREE", 00:40:18.352 "validity": 0.0 
00:40:18.352 }, 00:40:18.352 { 00:40:18.352 "id": 1, 00:40:18.352 "state": "FREE", 00:40:18.352 "validity": 0.0 00:40:18.352 }, 00:40:18.352 { 00:40:18.352 "id": 2, 00:40:18.352 "state": "FREE", 00:40:18.353 "validity": 0.0 00:40:18.353 }, 00:40:18.353 { 00:40:18.353 "id": 3, 00:40:18.353 "state": "FREE", 00:40:18.353 "validity": 0.0 00:40:18.353 }, 00:40:18.353 { 00:40:18.353 "id": 4, 00:40:18.353 "state": "FREE", 00:40:18.353 "validity": 0.0 00:40:18.353 }, 00:40:18.353 { 00:40:18.353 "id": 5, 00:40:18.353 "state": "FREE", 00:40:18.353 "validity": 0.0 00:40:18.353 }, 00:40:18.353 { 00:40:18.353 "id": 6, 00:40:18.353 "state": "FREE", 00:40:18.353 "validity": 0.0 00:40:18.353 }, 00:40:18.353 { 00:40:18.353 "id": 7, 00:40:18.353 "state": "FREE", 00:40:18.353 "validity": 0.0 00:40:18.353 }, 00:40:18.353 { 00:40:18.353 "id": 8, 00:40:18.353 "state": "FREE", 00:40:18.353 "validity": 0.0 00:40:18.353 }, 00:40:18.353 { 00:40:18.353 "id": 9, 00:40:18.353 "state": "FREE", 00:40:18.353 "validity": 0.0 00:40:18.353 }, 00:40:18.353 { 00:40:18.353 "id": 10, 00:40:18.353 "state": "FREE", 00:40:18.353 "validity": 0.0 00:40:18.353 }, 00:40:18.353 { 00:40:18.353 "id": 11, 00:40:18.353 "state": "FREE", 00:40:18.353 "validity": 0.0 00:40:18.353 }, 00:40:18.353 { 00:40:18.353 "id": 12, 00:40:18.353 "state": "FREE", 00:40:18.353 "validity": 0.0 00:40:18.353 }, 00:40:18.353 { 00:40:18.353 "id": 13, 00:40:18.353 "state": "FREE", 00:40:18.353 "validity": 0.0 00:40:18.353 }, 00:40:18.353 { 00:40:18.353 "id": 14, 00:40:18.353 "state": "FREE", 00:40:18.353 "validity": 0.0 00:40:18.353 }, 00:40:18.353 { 00:40:18.353 "id": 15, 00:40:18.353 "state": "FREE", 00:40:18.353 "validity": 0.0 00:40:18.353 }, 00:40:18.353 { 00:40:18.353 "id": 16, 00:40:18.353 "state": "FREE", 00:40:18.353 "validity": 0.0 00:40:18.353 }, 00:40:18.353 { 00:40:18.353 "id": 17, 00:40:18.353 "state": "FREE", 00:40:18.353 "validity": 0.0 00:40:18.353 } 00:40:18.353 ], 00:40:18.353 "read-only": true 00:40:18.353 }, 00:40:18.353 { 00:40:18.353 "name": "cache_device", 00:40:18.353 "type": "bdev", 00:40:18.353 "chunks": [ 00:40:18.353 { 00:40:18.353 "id": 0, 00:40:18.353 "state": "INACTIVE", 00:40:18.353 "utilization": 0.0 00:40:18.353 }, 00:40:18.353 { 00:40:18.353 "id": 1, 00:40:18.353 "state": "CLOSED", 00:40:18.353 "utilization": 1.0 00:40:18.353 }, 00:40:18.353 { 00:40:18.353 "id": 2, 00:40:18.353 "state": "CLOSED", 00:40:18.353 "utilization": 1.0 00:40:18.353 }, 00:40:18.353 { 00:40:18.353 "id": 3, 00:40:18.353 "state": "OPEN", 00:40:18.353 "utilization": 0.001953125 00:40:18.353 }, 00:40:18.353 { 00:40:18.353 "id": 4, 00:40:18.353 "state": "OPEN", 00:40:18.353 "utilization": 0.0 00:40:18.353 } 00:40:18.353 ], 00:40:18.353 "read-only": true 00:40:18.353 }, 00:40:18.353 { 00:40:18.353 "name": "verbose_mode", 00:40:18.353 "value": true, 00:40:18.353 "unit": "", 00:40:18.353 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:40:18.353 }, 00:40:18.353 { 00:40:18.353 "name": "prep_upgrade_on_shutdown", 00:40:18.353 "value": false, 00:40:18.353 "unit": "", 00:40:18.353 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:40:18.353 } 00:40:18.353 ] 00:40:18.353 } 00:40:18.353 19:07:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:40:18.612 [2024-10-08 19:07:47.296124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:40:18.612 [2024-10-08 19:07:47.296400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:40:18.612 [2024-10-08 19:07:47.296498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:40:18.612 [2024-10-08 19:07:47.296537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:18.612 [2024-10-08 19:07:47.296603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:18.612 [2024-10-08 19:07:47.296637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:40:18.612 [2024-10-08 19:07:47.296667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:40:18.612 [2024-10-08 19:07:47.296698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:18.612 [2024-10-08 19:07:47.296801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:18.612 [2024-10-08 19:07:47.296839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:40:18.612 [2024-10-08 19:07:47.296871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:40:18.612 [2024-10-08 19:07:47.296901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:18.612 [2024-10-08 19:07:47.297000] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.853 ms, result 0 00:40:18.612 true 00:40:18.612 19:07:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:40:18.612 19:07:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:40:18.612 19:07:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:40:18.871 19:07:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:40:18.871 19:07:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:40:18.871 19:07:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:40:19.129 [2024-10-08 19:07:47.796599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:19.129 [2024-10-08 19:07:47.796835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:40:19.129 [2024-10-08 19:07:47.796859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:40:19.129 [2024-10-08 19:07:47.796871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:19.129 [2024-10-08 19:07:47.796914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:19.129 [2024-10-08 19:07:47.796927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:40:19.129 [2024-10-08 19:07:47.796938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:40:19.129 [2024-10-08 19:07:47.796947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:19.129 [2024-10-08 19:07:47.796985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:19.129 [2024-10-08 19:07:47.796997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:40:19.129 [2024-10-08 19:07:47.797008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:40:19.129 [2024-10-08 19:07:47.797018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:40:19.129 [2024-10-08 19:07:47.797085] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.470 ms, result 0 00:40:19.129 true 00:40:19.129 19:07:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:40:19.388 { 00:40:19.388 "name": "ftl", 00:40:19.388 "properties": [ 00:40:19.388 { 00:40:19.388 "name": "superblock_version", 00:40:19.388 "value": 5, 00:40:19.388 "read-only": true 00:40:19.388 }, 00:40:19.388 { 00:40:19.388 "name": "base_device", 00:40:19.388 "bands": [ 00:40:19.388 { 00:40:19.388 "id": 0, 00:40:19.388 "state": "FREE", 00:40:19.388 "validity": 0.0 00:40:19.388 }, 00:40:19.388 { 00:40:19.388 "id": 1, 00:40:19.388 "state": "FREE", 00:40:19.388 "validity": 0.0 00:40:19.388 }, 00:40:19.388 { 00:40:19.388 "id": 2, 00:40:19.388 "state": "FREE", 00:40:19.388 "validity": 0.0 00:40:19.388 }, 00:40:19.388 { 00:40:19.388 "id": 3, 00:40:19.388 "state": "FREE", 00:40:19.388 "validity": 0.0 00:40:19.388 }, 00:40:19.388 { 00:40:19.388 "id": 4, 00:40:19.388 "state": "FREE", 00:40:19.388 "validity": 0.0 00:40:19.388 }, 00:40:19.388 { 00:40:19.388 "id": 5, 00:40:19.388 "state": "FREE", 00:40:19.388 "validity": 0.0 00:40:19.388 }, 00:40:19.388 { 00:40:19.388 "id": 6, 00:40:19.388 "state": "FREE", 00:40:19.388 "validity": 0.0 00:40:19.388 }, 00:40:19.388 { 00:40:19.388 "id": 7, 00:40:19.388 "state": "FREE", 00:40:19.388 "validity": 0.0 00:40:19.388 }, 00:40:19.388 { 00:40:19.388 "id": 8, 00:40:19.388 "state": "FREE", 00:40:19.388 "validity": 0.0 00:40:19.388 }, 00:40:19.389 { 00:40:19.389 "id": 9, 00:40:19.389 "state": "FREE", 00:40:19.389 "validity": 0.0 00:40:19.389 }, 00:40:19.389 { 00:40:19.389 "id": 10, 00:40:19.389 "state": "FREE", 00:40:19.389 "validity": 0.0 00:40:19.389 }, 00:40:19.389 { 00:40:19.389 "id": 11, 00:40:19.389 "state": "FREE", 00:40:19.389 "validity": 0.0 00:40:19.389 }, 00:40:19.389 { 00:40:19.389 "id": 12, 00:40:19.389 "state": "FREE", 00:40:19.389 "validity": 0.0 00:40:19.389 }, 00:40:19.389 { 00:40:19.389 "id": 13, 00:40:19.389 "state": "FREE", 00:40:19.389 "validity": 0.0 00:40:19.389 }, 00:40:19.389 { 00:40:19.389 "id": 14, 00:40:19.389 "state": "FREE", 00:40:19.389 "validity": 0.0 00:40:19.389 }, 00:40:19.389 { 00:40:19.389 "id": 15, 00:40:19.389 "state": "FREE", 00:40:19.389 "validity": 0.0 00:40:19.389 }, 00:40:19.389 { 00:40:19.389 "id": 16, 00:40:19.389 "state": "FREE", 00:40:19.389 "validity": 0.0 00:40:19.389 }, 00:40:19.389 { 00:40:19.389 "id": 17, 00:40:19.389 "state": "FREE", 00:40:19.389 "validity": 0.0 00:40:19.389 } 00:40:19.389 ], 00:40:19.389 "read-only": true 00:40:19.389 }, 00:40:19.389 { 00:40:19.389 "name": "cache_device", 00:40:19.389 "type": "bdev", 00:40:19.389 "chunks": [ 00:40:19.389 { 00:40:19.389 "id": 0, 00:40:19.389 "state": "INACTIVE", 00:40:19.389 "utilization": 0.0 00:40:19.389 }, 00:40:19.389 { 00:40:19.389 "id": 1, 00:40:19.389 "state": "CLOSED", 00:40:19.389 "utilization": 1.0 00:40:19.389 }, 00:40:19.389 { 00:40:19.389 "id": 2, 00:40:19.389 "state": "CLOSED", 00:40:19.389 "utilization": 1.0 00:40:19.389 }, 00:40:19.389 { 00:40:19.389 "id": 3, 00:40:19.389 "state": "OPEN", 00:40:19.389 "utilization": 0.001953125 00:40:19.389 }, 00:40:19.389 { 00:40:19.389 "id": 4, 00:40:19.389 "state": "OPEN", 00:40:19.389 "utilization": 0.0 00:40:19.389 } 00:40:19.389 ], 00:40:19.389 "read-only": true 00:40:19.389 }, 00:40:19.389 { 00:40:19.389 "name": "verbose_mode", 
00:40:19.389 "value": true, 00:40:19.389 "unit": "", 00:40:19.389 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:40:19.389 }, 00:40:19.389 { 00:40:19.389 "name": "prep_upgrade_on_shutdown", 00:40:19.389 "value": true, 00:40:19.389 "unit": "", 00:40:19.389 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:40:19.389 } 00:40:19.389 ] 00:40:19.389 } 00:40:19.389 19:07:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:40:19.389 19:07:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 81375 ]] 00:40:19.389 19:07:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 81375 00:40:19.389 19:07:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 81375 ']' 00:40:19.389 19:07:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 81375 00:40:19.389 19:07:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:40:19.389 19:07:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:19.389 19:07:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81375 00:40:19.389 killing process with pid 81375 00:40:19.389 19:07:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:19.389 19:07:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:19.389 19:07:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81375' 00:40:19.389 19:07:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 81375 00:40:19.389 19:07:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 81375 00:40:20.765 [2024-10-08 19:07:49.201148] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:40:20.765 [2024-10-08 19:07:49.221412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:20.765 [2024-10-08 19:07:49.221450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:40:20.765 [2024-10-08 19:07:49.221465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:40:20.765 [2024-10-08 19:07:49.221491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:20.765 [2024-10-08 19:07:49.221517] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:40:20.765 [2024-10-08 19:07:49.225631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:20.765 [2024-10-08 19:07:49.225666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:40:20.765 [2024-10-08 19:07:49.225678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.098 ms 00:40:20.765 [2024-10-08 19:07:49.225687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:28.884 [2024-10-08 19:07:56.544804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:28.884 [2024-10-08 19:07:56.544874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:40:28.884 [2024-10-08 19:07:56.544891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7319.042 ms 00:40:28.884 [2024-10-08 19:07:56.544902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:28.884 [2024-10-08 19:07:56.546023] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:40:28.884 [2024-10-08 19:07:56.546065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:40:28.884 [2024-10-08 19:07:56.546078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.101 ms 00:40:28.884 [2024-10-08 19:07:56.546089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:28.884 [2024-10-08 19:07:56.547041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:28.884 [2024-10-08 19:07:56.547068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:40:28.884 [2024-10-08 19:07:56.547081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.911 ms 00:40:28.884 [2024-10-08 19:07:56.547092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:28.884 [2024-10-08 19:07:56.562571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:28.884 [2024-10-08 19:07:56.562638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:40:28.884 [2024-10-08 19:07:56.562654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.436 ms 00:40:28.884 [2024-10-08 19:07:56.562664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:28.884 [2024-10-08 19:07:56.572934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:28.884 [2024-10-08 19:07:56.573006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:40:28.884 [2024-10-08 19:07:56.573039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.221 ms 00:40:28.884 [2024-10-08 19:07:56.573060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:28.884 [2024-10-08 19:07:56.573191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:28.884 [2024-10-08 19:07:56.573205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:40:28.884 [2024-10-08 19:07:56.573216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.085 ms 00:40:28.884 [2024-10-08 19:07:56.573226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:28.884 [2024-10-08 19:07:56.588800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:28.884 [2024-10-08 19:07:56.589106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:40:28.884 [2024-10-08 19:07:56.589132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.551 ms 00:40:28.885 [2024-10-08 19:07:56.589144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:28.885 [2024-10-08 19:07:56.605313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:28.885 [2024-10-08 19:07:56.605378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:40:28.885 [2024-10-08 19:07:56.605392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.071 ms 00:40:28.885 [2024-10-08 19:07:56.605419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:28.885 [2024-10-08 19:07:56.621129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:28.885 [2024-10-08 19:07:56.621190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:40:28.885 [2024-10-08 19:07:56.621205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.661 ms 00:40:28.885 [2024-10-08 19:07:56.621215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:28.885 [2024-10-08 19:07:56.636666] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:28.885 [2024-10-08 19:07:56.636727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:40:28.885 [2024-10-08 19:07:56.636742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.343 ms 00:40:28.885 [2024-10-08 19:07:56.636751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:28.885 [2024-10-08 19:07:56.636789] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:40:28.885 [2024-10-08 19:07:56.636808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:40:28.885 [2024-10-08 19:07:56.636821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:40:28.885 [2024-10-08 19:07:56.636833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:40:28.885 [2024-10-08 19:07:56.636843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:40:28.885 [2024-10-08 19:07:56.636855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:40:28.885 [2024-10-08 19:07:56.636865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:40:28.885 [2024-10-08 19:07:56.636896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:40:28.885 [2024-10-08 19:07:56.636907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:40:28.885 [2024-10-08 19:07:56.636917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:40:28.885 [2024-10-08 19:07:56.636928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:40:28.885 [2024-10-08 19:07:56.636939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:40:28.885 [2024-10-08 19:07:56.636950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:40:28.885 [2024-10-08 19:07:56.636975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:40:28.885 [2024-10-08 19:07:56.637003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:40:28.885 [2024-10-08 19:07:56.637014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:40:28.885 [2024-10-08 19:07:56.637025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:40:28.885 [2024-10-08 19:07:56.637035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:40:28.885 [2024-10-08 19:07:56.637046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:40:28.885 [2024-10-08 19:07:56.637059] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:40:28.885 [2024-10-08 19:07:56.637070] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 86c06e3e-1e42-4b15-b343-0a472f3d71a8 00:40:28.885 [2024-10-08 19:07:56.637081] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:40:28.885 [2024-10-08 19:07:56.637095] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:40:28.885 [2024-10-08 19:07:56.637117] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:40:28.885 [2024-10-08 19:07:56.637128] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:40:28.885 [2024-10-08 19:07:56.637143] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:40:28.885 [2024-10-08 19:07:56.637153] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:40:28.885 [2024-10-08 19:07:56.637163] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:40:28.885 [2024-10-08 19:07:56.637172] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:40:28.885 [2024-10-08 19:07:56.637183] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:40:28.885 [2024-10-08 19:07:56.637200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:28.885 [2024-10-08 19:07:56.637215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:40:28.885 [2024-10-08 19:07:56.637227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.411 ms 00:40:28.885 [2024-10-08 19:07:56.637238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:28.885 [2024-10-08 19:07:56.658183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:28.885 [2024-10-08 19:07:56.658403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:40:28.885 [2024-10-08 19:07:56.658429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.907 ms 00:40:28.885 [2024-10-08 19:07:56.658441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:28.885 [2024-10-08 19:07:56.659037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:28.885 [2024-10-08 19:07:56.659053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:40:28.885 [2024-10-08 19:07:56.659065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.551 ms 00:40:28.885 [2024-10-08 19:07:56.659076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:28.885 [2024-10-08 19:07:56.718558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:40:28.885 [2024-10-08 19:07:56.718637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:40:28.885 [2024-10-08 19:07:56.718652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:40:28.885 [2024-10-08 19:07:56.718662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:28.885 [2024-10-08 19:07:56.718718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:40:28.885 [2024-10-08 19:07:56.718730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:40:28.885 [2024-10-08 19:07:56.718741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:40:28.885 [2024-10-08 19:07:56.718751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:28.885 [2024-10-08 19:07:56.718858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:40:28.885 [2024-10-08 19:07:56.718873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:40:28.885 [2024-10-08 19:07:56.718884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:40:28.885 [2024-10-08 19:07:56.718894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:28.885 [2024-10-08 19:07:56.718914] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:40:28.885 [2024-10-08 19:07:56.718924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:40:28.885 [2024-10-08 19:07:56.718935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:40:28.885 [2024-10-08 19:07:56.718945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:28.885 [2024-10-08 19:07:56.842241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:40:28.885 [2024-10-08 19:07:56.842525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:40:28.885 [2024-10-08 19:07:56.842549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:40:28.885 [2024-10-08 19:07:56.842560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:28.885 [2024-10-08 19:07:56.945666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:40:28.885 [2024-10-08 19:07:56.945734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:40:28.885 [2024-10-08 19:07:56.945749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:40:28.885 [2024-10-08 19:07:56.945760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:28.885 [2024-10-08 19:07:56.945871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:40:28.885 [2024-10-08 19:07:56.945903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:40:28.885 [2024-10-08 19:07:56.945914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:40:28.885 [2024-10-08 19:07:56.945924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:28.885 [2024-10-08 19:07:56.946010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:40:28.885 [2024-10-08 19:07:56.946024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:40:28.885 [2024-10-08 19:07:56.946035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:40:28.885 [2024-10-08 19:07:56.946045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:28.885 [2024-10-08 19:07:56.946189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:40:28.885 [2024-10-08 19:07:56.946204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:40:28.885 [2024-10-08 19:07:56.946220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:40:28.885 [2024-10-08 19:07:56.946231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:28.885 [2024-10-08 19:07:56.946273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:40:28.885 [2024-10-08 19:07:56.946285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:40:28.885 [2024-10-08 19:07:56.946296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:40:28.885 [2024-10-08 19:07:56.946306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:28.885 [2024-10-08 19:07:56.946345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:40:28.885 [2024-10-08 19:07:56.946357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:40:28.885 [2024-10-08 19:07:56.946371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:40:28.885 [2024-10-08 19:07:56.946381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:28.885 
[2024-10-08 19:07:56.946425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:40:28.885 [2024-10-08 19:07:56.946444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:40:28.885 [2024-10-08 19:07:56.946454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:40:28.885 [2024-10-08 19:07:56.946464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:28.885 [2024-10-08 19:07:56.946586] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7725.100 ms, result 0 00:40:32.176 19:08:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:40:32.176 19:08:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:40:32.176 19:08:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:40:32.176 19:08:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:40:32.176 19:08:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:40:32.176 19:08:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81963 00:40:32.176 19:08:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:40:32.176 19:08:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:40:32.176 19:08:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81963 00:40:32.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:32.176 19:08:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 81963 ']' 00:40:32.176 19:08:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:32.176 19:08:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:32.176 19:08:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:32.176 19:08:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:32.176 19:08:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:40:32.176 [2024-10-08 19:08:00.578856] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
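After the prepped shutdown, the target is relaunched from the JSON config captured by the save_config call before the fill passes, rather than by replaying RPCs, and FTL is expected to come up from the persisted state (note the 'SHM: clean 0, shm_clean 0' and 'Load super block' lines below). The relaunch, verbatim from the trace:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
    --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json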
00:40:32.176 [2024-10-08 19:08:00.579206] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81963 ] 00:40:32.176 [2024-10-08 19:08:00.742453] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:32.434 [2024-10-08 19:08:00.953544] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:40:33.371 [2024-10-08 19:08:01.916485] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:40:33.371 [2024-10-08 19:08:01.916552] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:40:33.371 [2024-10-08 19:08:02.063499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:33.371 [2024-10-08 19:08:02.063559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:40:33.371 [2024-10-08 19:08:02.063576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:40:33.371 [2024-10-08 19:08:02.063587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:33.371 [2024-10-08 19:08:02.063644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:33.371 [2024-10-08 19:08:02.063658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:40:33.371 [2024-10-08 19:08:02.063669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:40:33.371 [2024-10-08 19:08:02.063680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:33.371 [2024-10-08 19:08:02.063714] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:40:33.371 [2024-10-08 19:08:02.064725] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:40:33.371 [2024-10-08 19:08:02.064754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:33.371 [2024-10-08 19:08:02.064765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:40:33.371 [2024-10-08 19:08:02.064776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.055 ms 00:40:33.371 [2024-10-08 19:08:02.064789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:33.371 [2024-10-08 19:08:02.066257] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:40:33.371 [2024-10-08 19:08:02.085702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:33.371 [2024-10-08 19:08:02.085738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:40:33.371 [2024-10-08 19:08:02.085753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.445 ms 00:40:33.371 [2024-10-08 19:08:02.085779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:33.371 [2024-10-08 19:08:02.085842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:33.371 [2024-10-08 19:08:02.085854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:40:33.371 [2024-10-08 19:08:02.085866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:40:33.371 [2024-10-08 19:08:02.085875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:33.371 [2024-10-08 19:08:02.092692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:33.371 [2024-10-08 
19:08:02.092878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:40:33.371 [2024-10-08 19:08:02.092900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.713 ms 00:40:33.371 [2024-10-08 19:08:02.092911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:33.371 [2024-10-08 19:08:02.092999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:33.371 [2024-10-08 19:08:02.093014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:40:33.371 [2024-10-08 19:08:02.093025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.061 ms 00:40:33.371 [2024-10-08 19:08:02.093039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:33.371 [2024-10-08 19:08:02.093091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:33.371 [2024-10-08 19:08:02.093103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:40:33.371 [2024-10-08 19:08:02.093114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:40:33.371 [2024-10-08 19:08:02.093125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:33.371 [2024-10-08 19:08:02.093153] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:40:33.371 [2024-10-08 19:08:02.097894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:33.371 [2024-10-08 19:08:02.097925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:40:33.371 [2024-10-08 19:08:02.097937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.749 ms 00:40:33.371 [2024-10-08 19:08:02.097962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:33.371 [2024-10-08 19:08:02.098017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:33.371 [2024-10-08 19:08:02.098028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:40:33.371 [2024-10-08 19:08:02.098043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:40:33.371 [2024-10-08 19:08:02.098054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:33.371 [2024-10-08 19:08:02.098112] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:40:33.372 [2024-10-08 19:08:02.098136] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:40:33.372 [2024-10-08 19:08:02.098171] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:40:33.372 [2024-10-08 19:08:02.098189] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:40:33.372 [2024-10-08 19:08:02.098282] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:40:33.372 [2024-10-08 19:08:02.098299] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:40:33.372 [2024-10-08 19:08:02.098312] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:40:33.372 [2024-10-08 19:08:02.098325] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:40:33.372 [2024-10-08 19:08:02.098337] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:40:33.372 [2024-10-08 19:08:02.098348] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:40:33.372 [2024-10-08 19:08:02.098359] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:40:33.372 [2024-10-08 19:08:02.098368] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:40:33.372 [2024-10-08 19:08:02.098378] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:40:33.372 [2024-10-08 19:08:02.098389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:33.372 [2024-10-08 19:08:02.098400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:40:33.372 [2024-10-08 19:08:02.098410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.280 ms 00:40:33.372 [2024-10-08 19:08:02.098423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:33.372 [2024-10-08 19:08:02.098501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:33.372 [2024-10-08 19:08:02.098512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:40:33.372 [2024-10-08 19:08:02.098523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.057 ms 00:40:33.372 [2024-10-08 19:08:02.098533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:33.372 [2024-10-08 19:08:02.098625] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:40:33.372 [2024-10-08 19:08:02.098637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:40:33.372 [2024-10-08 19:08:02.098649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:40:33.372 [2024-10-08 19:08:02.098659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:40:33.372 [2024-10-08 19:08:02.098673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:40:33.372 [2024-10-08 19:08:02.098682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:40:33.372 [2024-10-08 19:08:02.098692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:40:33.372 [2024-10-08 19:08:02.098701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:40:33.372 [2024-10-08 19:08:02.098712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:40:33.372 [2024-10-08 19:08:02.098721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:40:33.372 [2024-10-08 19:08:02.098731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:40:33.372 [2024-10-08 19:08:02.098741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:40:33.372 [2024-10-08 19:08:02.098750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:40:33.372 [2024-10-08 19:08:02.098760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:40:33.372 [2024-10-08 19:08:02.098769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:40:33.372 [2024-10-08 19:08:02.098778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:40:33.372 [2024-10-08 19:08:02.098787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:40:33.372 [2024-10-08 19:08:02.098796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:40:33.372 [2024-10-08 19:08:02.098805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:40:33.372 [2024-10-08 19:08:02.098814] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:40:33.372 [2024-10-08 19:08:02.098823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:40:33.372 [2024-10-08 19:08:02.098832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:40:33.372 [2024-10-08 19:08:02.098841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:40:33.372 [2024-10-08 19:08:02.098851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:40:33.372 [2024-10-08 19:08:02.098871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:40:33.372 [2024-10-08 19:08:02.098880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:40:33.372 [2024-10-08 19:08:02.098890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:40:33.372 [2024-10-08 19:08:02.098899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:40:33.372 [2024-10-08 19:08:02.098908] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:40:33.372 [2024-10-08 19:08:02.098918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:40:33.372 [2024-10-08 19:08:02.098928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:40:33.372 [2024-10-08 19:08:02.098937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:40:33.372 [2024-10-08 19:08:02.098946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:40:33.372 [2024-10-08 19:08:02.098955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:40:33.372 [2024-10-08 19:08:02.098965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:40:33.372 [2024-10-08 19:08:02.098992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:40:33.372 [2024-10-08 19:08:02.099002] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:40:33.372 [2024-10-08 19:08:02.099012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:40:33.372 [2024-10-08 19:08:02.099022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:40:33.372 [2024-10-08 19:08:02.099031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:40:33.372 [2024-10-08 19:08:02.099041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:40:33.372 [2024-10-08 19:08:02.099050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:40:33.372 [2024-10-08 19:08:02.099060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:40:33.372 [2024-10-08 19:08:02.099070] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:40:33.372 [2024-10-08 19:08:02.099080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:40:33.372 [2024-10-08 19:08:02.099090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:40:33.372 [2024-10-08 19:08:02.099100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:40:33.372 [2024-10-08 19:08:02.099110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:40:33.372 [2024-10-08 19:08:02.099120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:40:33.372 [2024-10-08 19:08:02.099129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:40:33.372 [2024-10-08 19:08:02.099139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:40:33.372 [2024-10-08 19:08:02.099148] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:40:33.372 [2024-10-08 19:08:02.099157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:40:33.372 [2024-10-08 19:08:02.099168] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:40:33.372 [2024-10-08 19:08:02.099180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:33.372 [2024-10-08 19:08:02.099192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:40:33.372 [2024-10-08 19:08:02.099202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:40:33.372 [2024-10-08 19:08:02.099213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:40:33.372 [2024-10-08 19:08:02.099224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:40:33.372 [2024-10-08 19:08:02.099234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:40:33.372 [2024-10-08 19:08:02.099245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:40:33.372 [2024-10-08 19:08:02.099255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:40:33.372 [2024-10-08 19:08:02.099266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:40:33.372 [2024-10-08 19:08:02.099276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:40:33.372 [2024-10-08 19:08:02.099286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:40:33.372 [2024-10-08 19:08:02.099296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:40:33.372 [2024-10-08 19:08:02.099307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:40:33.372 [2024-10-08 19:08:02.099317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:40:33.372 [2024-10-08 19:08:02.099328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:40:33.372 [2024-10-08 19:08:02.099338] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:40:33.372 [2024-10-08 19:08:02.099349] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:33.372 [2024-10-08 19:08:02.099360] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:40:33.372 [2024-10-08 19:08:02.099381] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:40:33.372 [2024-10-08 19:08:02.099392] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:40:33.372 [2024-10-08 19:08:02.099406] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:40:33.372 [2024-10-08 19:08:02.099418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:33.372 [2024-10-08 19:08:02.099428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:40:33.372 [2024-10-08 19:08:02.099439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.848 ms 00:40:33.373 [2024-10-08 19:08:02.099453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:33.373 [2024-10-08 19:08:02.099501] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:40:33.373 [2024-10-08 19:08:02.099514] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:40:36.658 [2024-10-08 19:08:05.024929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:36.658 [2024-10-08 19:08:05.024996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:40:36.658 [2024-10-08 19:08:05.025013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2925.409 ms 00:40:36.658 [2024-10-08 19:08:05.025049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:36.658 [2024-10-08 19:08:05.063030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:36.658 [2024-10-08 19:08:05.063086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:40:36.658 [2024-10-08 19:08:05.063102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.668 ms 00:40:36.658 [2024-10-08 19:08:05.063113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:36.658 [2024-10-08 19:08:05.063213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:36.658 [2024-10-08 19:08:05.063226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:40:36.658 [2024-10-08 19:08:05.063237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:40:36.658 [2024-10-08 19:08:05.063248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:36.658 [2024-10-08 19:08:05.118802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:36.658 [2024-10-08 19:08:05.118860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:40:36.658 [2024-10-08 19:08:05.118876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 55.508 ms 00:40:36.658 [2024-10-08 19:08:05.118903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:36.658 [2024-10-08 19:08:05.118952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:36.658 [2024-10-08 19:08:05.118963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:40:36.658 [2024-10-08 19:08:05.118992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:40:36.658 [2024-10-08 19:08:05.119003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:36.658 [2024-10-08 19:08:05.119535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:36.658 [2024-10-08 19:08:05.119557] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:40:36.658 [2024-10-08 19:08:05.119576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.435 ms 00:40:36.658 [2024-10-08 19:08:05.119586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:36.658 [2024-10-08 19:08:05.119633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:36.658 [2024-10-08 19:08:05.119644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:40:36.658 [2024-10-08 19:08:05.119656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:40:36.658 [2024-10-08 19:08:05.119666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:36.658 [2024-10-08 19:08:05.139004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:36.658 [2024-10-08 19:08:05.139251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:40:36.658 [2024-10-08 19:08:05.139277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.314 ms 00:40:36.658 [2024-10-08 19:08:05.139289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:36.658 [2024-10-08 19:08:05.159150] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:40:36.658 [2024-10-08 19:08:05.159219] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:40:36.658 [2024-10-08 19:08:05.159252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:36.658 [2024-10-08 19:08:05.159265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:40:36.658 [2024-10-08 19:08:05.159278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.774 ms 00:40:36.658 [2024-10-08 19:08:05.159288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:36.658 [2024-10-08 19:08:05.179948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:36.658 [2024-10-08 19:08:05.179999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:40:36.658 [2024-10-08 19:08:05.180014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.572 ms 00:40:36.658 [2024-10-08 19:08:05.180024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:36.658 [2024-10-08 19:08:05.198125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:36.658 [2024-10-08 19:08:05.198162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:40:36.658 [2024-10-08 19:08:05.198176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.037 ms 00:40:36.658 [2024-10-08 19:08:05.198202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:36.658 [2024-10-08 19:08:05.216428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:36.658 [2024-10-08 19:08:05.216465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:40:36.658 [2024-10-08 19:08:05.216479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.185 ms 00:40:36.658 [2024-10-08 19:08:05.216489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:36.658 [2024-10-08 19:08:05.217336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:36.658 [2024-10-08 19:08:05.217360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:40:36.658 [2024-10-08 
19:08:05.217372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.742 ms 00:40:36.658 [2024-10-08 19:08:05.217382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:36.658 [2024-10-08 19:08:05.307128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:36.658 [2024-10-08 19:08:05.307193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:40:36.658 [2024-10-08 19:08:05.307210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 89.720 ms 00:40:36.658 [2024-10-08 19:08:05.307221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:36.658 [2024-10-08 19:08:05.318167] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:40:36.658 [2024-10-08 19:08:05.319082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:36.658 [2024-10-08 19:08:05.319110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:40:36.658 [2024-10-08 19:08:05.319128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.791 ms 00:40:36.659 [2024-10-08 19:08:05.319138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:36.659 [2024-10-08 19:08:05.319225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:36.659 [2024-10-08 19:08:05.319239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:40:36.659 [2024-10-08 19:08:05.319250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:40:36.659 [2024-10-08 19:08:05.319260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:36.659 [2024-10-08 19:08:05.319325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:36.659 [2024-10-08 19:08:05.319337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:40:36.659 [2024-10-08 19:08:05.319349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:40:36.659 [2024-10-08 19:08:05.319362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:36.659 [2024-10-08 19:08:05.319395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:36.659 [2024-10-08 19:08:05.319406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:40:36.659 [2024-10-08 19:08:05.319417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:40:36.659 [2024-10-08 19:08:05.319427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:36.659 [2024-10-08 19:08:05.319464] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:40:36.659 [2024-10-08 19:08:05.319477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:36.659 [2024-10-08 19:08:05.319487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:40:36.659 [2024-10-08 19:08:05.319497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:40:36.659 [2024-10-08 19:08:05.319507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:36.659 [2024-10-08 19:08:05.357072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:36.659 [2024-10-08 19:08:05.357113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:40:36.659 [2024-10-08 19:08:05.357128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.538 ms 00:40:36.659 [2024-10-08 19:08:05.357155] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:36.659 [2024-10-08 19:08:05.357236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:36.659 [2024-10-08 19:08:05.357249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:40:36.659 [2024-10-08 19:08:05.357261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:40:36.659 [2024-10-08 19:08:05.357283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:36.659 [2024-10-08 19:08:05.358400] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3294.409 ms, result 0 00:40:36.659 [2024-10-08 19:08:05.373451] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:36.659 [2024-10-08 19:08:05.389482] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:40:36.659 [2024-10-08 19:08:05.398852] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:40:36.917 19:08:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:36.917 19:08:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:40:36.917 19:08:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:40:36.917 19:08:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:40:36.917 19:08:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:40:37.186 [2024-10-08 19:08:05.799096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:37.186 [2024-10-08 19:08:05.799163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:40:37.186 [2024-10-08 19:08:05.799179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:40:37.186 [2024-10-08 19:08:05.799205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:37.186 [2024-10-08 19:08:05.799233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:37.186 [2024-10-08 19:08:05.799244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:40:37.186 [2024-10-08 19:08:05.799255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:40:37.186 [2024-10-08 19:08:05.799265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:37.186 [2024-10-08 19:08:05.799286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:37.186 [2024-10-08 19:08:05.799300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:40:37.186 [2024-10-08 19:08:05.799311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:40:37.186 [2024-10-08 19:08:05.799321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:37.186 [2024-10-08 19:08:05.799389] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.282 ms, result 0 00:40:37.186 true 00:40:37.186 19:08:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:40:37.458 { 00:40:37.458 "name": "ftl", 00:40:37.458 "properties": [ 00:40:37.458 { 00:40:37.458 "name": "superblock_version", 00:40:37.458 "value": 5, 00:40:37.458 "read-only": true 00:40:37.459 }, 
00:40:37.459 { 00:40:37.459 "name": "base_device", 00:40:37.459 "bands": [ 00:40:37.459 { 00:40:37.459 "id": 0, 00:40:37.459 "state": "CLOSED", 00:40:37.459 "validity": 1.0 00:40:37.459 }, 00:40:37.459 { 00:40:37.459 "id": 1, 00:40:37.459 "state": "CLOSED", 00:40:37.459 "validity": 1.0 00:40:37.459 }, 00:40:37.459 { 00:40:37.459 "id": 2, 00:40:37.459 "state": "CLOSED", 00:40:37.459 "validity": 0.007843137254901933 00:40:37.459 }, 00:40:37.459 { 00:40:37.459 "id": 3, 00:40:37.459 "state": "FREE", 00:40:37.459 "validity": 0.0 00:40:37.459 }, 00:40:37.459 { 00:40:37.459 "id": 4, 00:40:37.459 "state": "FREE", 00:40:37.459 "validity": 0.0 00:40:37.459 }, 00:40:37.459 { 00:40:37.459 "id": 5, 00:40:37.459 "state": "FREE", 00:40:37.459 "validity": 0.0 00:40:37.459 }, 00:40:37.459 { 00:40:37.459 "id": 6, 00:40:37.459 "state": "FREE", 00:40:37.459 "validity": 0.0 00:40:37.459 }, 00:40:37.459 { 00:40:37.459 "id": 7, 00:40:37.459 "state": "FREE", 00:40:37.459 "validity": 0.0 00:40:37.459 }, 00:40:37.459 { 00:40:37.459 "id": 8, 00:40:37.459 "state": "FREE", 00:40:37.459 "validity": 0.0 00:40:37.459 }, 00:40:37.459 { 00:40:37.459 "id": 9, 00:40:37.459 "state": "FREE", 00:40:37.459 "validity": 0.0 00:40:37.459 }, 00:40:37.459 { 00:40:37.459 "id": 10, 00:40:37.459 "state": "FREE", 00:40:37.459 "validity": 0.0 00:40:37.459 }, 00:40:37.459 { 00:40:37.459 "id": 11, 00:40:37.459 "state": "FREE", 00:40:37.459 "validity": 0.0 00:40:37.459 }, 00:40:37.459 { 00:40:37.459 "id": 12, 00:40:37.459 "state": "FREE", 00:40:37.459 "validity": 0.0 00:40:37.459 }, 00:40:37.459 { 00:40:37.459 "id": 13, 00:40:37.459 "state": "FREE", 00:40:37.459 "validity": 0.0 00:40:37.459 }, 00:40:37.459 { 00:40:37.459 "id": 14, 00:40:37.459 "state": "FREE", 00:40:37.459 "validity": 0.0 00:40:37.459 }, 00:40:37.459 { 00:40:37.459 "id": 15, 00:40:37.459 "state": "FREE", 00:40:37.459 "validity": 0.0 00:40:37.459 }, 00:40:37.459 { 00:40:37.459 "id": 16, 00:40:37.459 "state": "FREE", 00:40:37.459 "validity": 0.0 00:40:37.459 }, 00:40:37.459 { 00:40:37.459 "id": 17, 00:40:37.459 "state": "FREE", 00:40:37.459 "validity": 0.0 00:40:37.459 } 00:40:37.459 ], 00:40:37.459 "read-only": true 00:40:37.459 }, 00:40:37.459 { 00:40:37.459 "name": "cache_device", 00:40:37.459 "type": "bdev", 00:40:37.459 "chunks": [ 00:40:37.459 { 00:40:37.459 "id": 0, 00:40:37.459 "state": "INACTIVE", 00:40:37.459 "utilization": 0.0 00:40:37.459 }, 00:40:37.459 { 00:40:37.459 "id": 1, 00:40:37.459 "state": "OPEN", 00:40:37.459 "utilization": 0.0 00:40:37.459 }, 00:40:37.459 { 00:40:37.459 "id": 2, 00:40:37.459 "state": "OPEN", 00:40:37.459 "utilization": 0.0 00:40:37.459 }, 00:40:37.459 { 00:40:37.459 "id": 3, 00:40:37.459 "state": "FREE", 00:40:37.459 "utilization": 0.0 00:40:37.459 }, 00:40:37.459 { 00:40:37.459 "id": 4, 00:40:37.459 "state": "FREE", 00:40:37.459 "utilization": 0.0 00:40:37.459 } 00:40:37.459 ], 00:40:37.459 "read-only": true 00:40:37.459 }, 00:40:37.459 { 00:40:37.459 "name": "verbose_mode", 00:40:37.459 "value": true, 00:40:37.459 "unit": "", 00:40:37.459 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:40:37.459 }, 00:40:37.459 { 00:40:37.459 "name": "prep_upgrade_on_shutdown", 00:40:37.459 "value": false, 00:40:37.459 "unit": "", 00:40:37.459 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:40:37.459 } 00:40:37.459 ] 00:40:37.459 } 00:40:37.459 19:08:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:40:37.459 19:08:06 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:40:37.459 19:08:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'
00:40:37.718 19:08:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0
00:40:37.718 19:08:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]]
00:40:37.718 19:08:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties
00:40:37.718 19:08:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:40:37.718 19:08:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length'
00:40:37.976 Validate MD5 checksum, iteration 1 19:08:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0
00:40:37.976 19:08:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]]
00:40:37.976 19:08:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum
00:40:37.976 19:08:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0
00:40:37.976 19:08:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 ))
00:40:37.976 19:08:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:40:37.976 19:08:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1'
00:40:37.976 19:08:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:40:37.977 19:08:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
00:40:37.977 19:08:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:40:37.977 19:08:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:40:37.977 19:08:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
00:40:37.977 19:08:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:40:37.977 [2024-10-08 19:08:06.717009] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization...
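Both jq probes above must come back 0 before the read-back passes start: with verbose_mode enabled, bdev_ftl_get_properties exposes per-chunk utilization and per-band state, and a dirty chunk or a still-open band would make the checksum comparison meaningless. Condensed into a standalone check (the jq filters are copied verbatim from the trace; the surrounding shell is a sketch):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  used=$("$rpc" bdev_ftl_get_properties -b ftl \
      | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
  opened=$("$rpc" bdev_ftl_get_properties -b ftl \
      | jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length')
  [[ $used -eq 0 && $opened -eq 0 ]]  # both hold in this run: used=0, opened=0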
00:40:37.977 [2024-10-08 19:08:06.717612] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82039 ]
00:40:38.234 [2024-10-08 19:08:06.888597] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:40:38.800 [2024-10-08 19:08:07.248575] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:40:40.716  [2024-10-08T19:08:09.731Z] Copying: 679/1024 [MB] (679 MBps) [2024-10-08T19:08:11.634Z] Copying: 1024/1024 [MB] (average 665 MBps)
00:40:42.878
00:40:42.878 19:08:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024
00:40:42.878 19:08:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file
00:40:44.252 19:08:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d '
00:40:44.511 Validate MD5 checksum, iteration 2 19:08:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=75c68b1bf449b8f2f2b75209468bdbe6
00:40:44.511 19:08:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 75c68b1bf449b8f2f2b75209468bdbe6 != \7\5\c\6\8\b\1\b\f\4\4\9\b\8\f\2\f\2\b\7\5\2\0\9\4\6\8\b\d\b\e\6 ]]
00:40:44.511 19:08:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ ))
00:40:44.511 19:08:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:40:44.511 19:08:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2'
00:40:44.511 19:08:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024
00:40:44.511 19:08:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
00:40:44.511 19:08:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:40:44.511 19:08:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:40:44.511 19:08:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
00:40:44.511 19:08:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024
00:40:44.511 [2024-10-08 19:08:13.090583] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization...
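Iteration 1 above pulled 1024 blocks of 1 MiB from ftln1 through the NVMe/TCP initiator into a scratch file, hashed it, matched the recorded sum (75c68b1bf449b8f2f2b75209468bdbe6), and advanced skip by 1024 for the next window. The test_validate_checksum loop behind those xtrace lines is roughly the following sketch, where tmp_file and the md5 array stand in for however the test stores its scratch path and the sums recorded at write time:

  skip=0
  for ((i = 0; i < iterations; i++)); do
      echo "Validate MD5 checksum, iteration $((i + 1))"
      # tcp_dd drives spdk_dd against the NVMe/TCP initiator config (ini.json)
      tcp_dd --ib=ftln1 --of="$tmp_file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
      skip=$((skip + 1024))
      sum=$(md5sum "$tmp_file" | cut -f1 -d' ')
      [[ $sum == "${md5[i]}" ]] || return 1  # any mismatch fails the test
  done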
00:40:44.511 [2024-10-08 19:08:13.090935] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82106 ]
00:40:44.857 [2024-10-08 19:08:13.258239] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:40:45.789 [2024-10-08 19:08:13.524423] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:40:46.759  [2024-10-08T19:08:16.086Z] Copying: 632/1024 [MB] (632 MBps) [2024-10-08T19:08:19.373Z] Copying: 1024/1024 [MB] (average 610 MBps)
00:40:50.616
00:40:50.616 19:08:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048
00:40:50.616 19:08:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file
00:40:51.990 19:08:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d '
00:40:51.990 19:08:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=11d116c08b950289769ad06320a9b1e2
00:40:51.990 19:08:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 11d116c08b950289769ad06320a9b1e2 != \1\1\d\1\1\6\c\0\8\b\9\5\0\2\8\9\7\6\9\a\d\0\6\3\2\0\a\9\b\1\e\2 ]]
00:40:51.990 19:08:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ ))
00:40:51.990 19:08:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:40:51.990 19:08:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty
00:40:51.990 19:08:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 81963 ]]
00:40:51.990 19:08:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 81963
00:40:51.990 19:08:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid
00:40:51.990 19:08:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup
00:40:51.990 19:08:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev=
00:40:51.990 19:08:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev=
00:40:51.990 19:08:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:40:51.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:40:51.990 19:08:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=82184
00:40:51.990 19:08:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:40:51.990 19:08:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid
00:40:51.990 19:08:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 82184
00:40:51.990 19:08:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 82184 ']'
00:40:51.990 19:08:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:40:51.990 19:08:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100
00:40:51.990 19:08:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
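The kill -9 here is the point of the whole test: pid 81963 dies without ever running the graceful 'FTL shutdown' management process like the one that completed at the top of this section, so the FTL state set dirty earlier stays dirty, and the replacement target (pid 82184, below) has to rebuild its state from shared memory and the P2L checkpoints. Reduced to a sketch using the functions traced in this log:

  tcp_target_shutdown_dirty() {
      [[ -n $spdk_tgt_pid ]] || return 0
      kill -9 "$spdk_tgt_pid"  # SIGKILL: no graceful FTL shutdown runs
      unset spdk_tgt_pid
  }
  tcp_target_shutdown_dirty
  tcp_target_setup  # the restart below goes through dirty-state recovery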
00:40:51.990 19:08:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:51.990 19:08:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:40:51.990 [2024-10-08 19:08:20.743899] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:40:51.990 [2024-10-08 19:08:20.744353] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82184 ] 00:40:52.248 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 830: 81963 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:40:52.248 [2024-10-08 19:08:20.911885] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:52.508 [2024-10-08 19:08:21.250094] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:40:53.891 [2024-10-08 19:08:22.251572] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:40:53.891 [2024-10-08 19:08:22.251827] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:40:53.891 [2024-10-08 19:08:22.399259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:53.892 [2024-10-08 19:08:22.399589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:40:53.892 [2024-10-08 19:08:22.399737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:40:53.892 [2024-10-08 19:08:22.399787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:53.892 [2024-10-08 19:08:22.399916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:53.892 [2024-10-08 19:08:22.400152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:40:53.892 [2024-10-08 19:08:22.400177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.062 ms 00:40:53.892 [2024-10-08 19:08:22.400194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:53.892 [2024-10-08 19:08:22.400252] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:40:53.892 [2024-10-08 19:08:22.401363] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:40:53.892 [2024-10-08 19:08:22.401396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:53.892 [2024-10-08 19:08:22.401408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:40:53.892 [2024-10-08 19:08:22.401420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.167 ms 00:40:53.892 [2024-10-08 19:08:22.401435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:53.892 [2024-10-08 19:08:22.401914] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:40:53.892 [2024-10-08 19:08:22.427803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:53.892 [2024-10-08 19:08:22.428133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:40:53.892 [2024-10-08 19:08:22.428180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.886 ms 00:40:53.892 [2024-10-08 19:08:22.428193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:53.892 [2024-10-08 19:08:22.442941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:40:53.892 [2024-10-08 19:08:22.442989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:40:53.892 [2024-10-08 19:08:22.443002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:40:53.892 [2024-10-08 19:08:22.443029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:53.892 [2024-10-08 19:08:22.443588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:53.892 [2024-10-08 19:08:22.443612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:40:53.892 [2024-10-08 19:08:22.443623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.440 ms 00:40:53.892 [2024-10-08 19:08:22.443634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:53.892 [2024-10-08 19:08:22.443699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:53.892 [2024-10-08 19:08:22.443712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:40:53.892 [2024-10-08 19:08:22.443723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:40:53.892 [2024-10-08 19:08:22.443734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:53.892 [2024-10-08 19:08:22.443770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:53.892 [2024-10-08 19:08:22.443781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:40:53.892 [2024-10-08 19:08:22.443797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:40:53.892 [2024-10-08 19:08:22.443812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:53.892 [2024-10-08 19:08:22.443862] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:40:53.892 [2024-10-08 19:08:22.448305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:53.892 [2024-10-08 19:08:22.448342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:40:53.892 [2024-10-08 19:08:22.448355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.469 ms 00:40:53.892 [2024-10-08 19:08:22.448365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:53.892 [2024-10-08 19:08:22.448403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:53.892 [2024-10-08 19:08:22.448414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:40:53.892 [2024-10-08 19:08:22.448425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:40:53.892 [2024-10-08 19:08:22.448435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:53.892 [2024-10-08 19:08:22.448491] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:40:53.892 [2024-10-08 19:08:22.448516] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:40:53.892 [2024-10-08 19:08:22.448557] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:40:53.892 [2024-10-08 19:08:22.448576] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:40:53.892 [2024-10-08 19:08:22.448666] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:40:53.892 [2024-10-08 19:08:22.448679] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:40:53.892 [2024-10-08 19:08:22.448692] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:40:53.892 [2024-10-08 19:08:22.448706] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:40:53.892 [2024-10-08 19:08:22.448718] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:40:53.892 [2024-10-08 19:08:22.448729] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:40:53.892 [2024-10-08 19:08:22.448742] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:40:53.892 [2024-10-08 19:08:22.448752] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:40:53.892 [2024-10-08 19:08:22.448762] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:40:53.892 [2024-10-08 19:08:22.448773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:53.892 [2024-10-08 19:08:22.448783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:40:53.892 [2024-10-08 19:08:22.448793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.285 ms 00:40:53.892 [2024-10-08 19:08:22.448803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:53.892 [2024-10-08 19:08:22.448877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:53.892 [2024-10-08 19:08:22.448888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:40:53.892 [2024-10-08 19:08:22.448899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:40:53.892 [2024-10-08 19:08:22.448912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:53.892 [2024-10-08 19:08:22.449027] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:40:53.892 [2024-10-08 19:08:22.449041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:40:53.892 [2024-10-08 19:08:22.449052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:40:53.892 [2024-10-08 19:08:22.449062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:40:53.892 [2024-10-08 19:08:22.449073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:40:53.892 [2024-10-08 19:08:22.449082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:40:53.892 [2024-10-08 19:08:22.449092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:40:53.892 [2024-10-08 19:08:22.449101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:40:53.892 [2024-10-08 19:08:22.449112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:40:53.892 [2024-10-08 19:08:22.449121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:40:53.892 [2024-10-08 19:08:22.449131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:40:53.892 [2024-10-08 19:08:22.449141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:40:53.892 [2024-10-08 19:08:22.449151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:40:53.892 [2024-10-08 19:08:22.449159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:40:53.892 [2024-10-08 19:08:22.449169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:40:53.892 [2024-10-08 19:08:22.449178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:40:53.892 [2024-10-08 19:08:22.449187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:40:53.892 [2024-10-08 19:08:22.449196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:40:53.892 [2024-10-08 19:08:22.449205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:40:53.892 [2024-10-08 19:08:22.449215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:40:53.892 [2024-10-08 19:08:22.449225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:40:53.892 [2024-10-08 19:08:22.449234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:40:53.892 [2024-10-08 19:08:22.449265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:40:53.892 [2024-10-08 19:08:22.449275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:40:53.892 [2024-10-08 19:08:22.449284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:40:53.892 [2024-10-08 19:08:22.449293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:40:53.892 [2024-10-08 19:08:22.449302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:40:53.892 [2024-10-08 19:08:22.449311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:40:53.892 [2024-10-08 19:08:22.449320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:40:53.892 [2024-10-08 19:08:22.449329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:40:53.892 [2024-10-08 19:08:22.449339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:40:53.892 [2024-10-08 19:08:22.449348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:40:53.892 [2024-10-08 19:08:22.449357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:40:53.892 [2024-10-08 19:08:22.449366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:40:53.892 [2024-10-08 19:08:22.449375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:40:53.892 [2024-10-08 19:08:22.449385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:40:53.892 [2024-10-08 19:08:22.449394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:40:53.892 [2024-10-08 19:08:22.449402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:40:53.892 [2024-10-08 19:08:22.449411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:40:53.892 [2024-10-08 19:08:22.449420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:40:53.892 [2024-10-08 19:08:22.449429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:40:53.892 [2024-10-08 19:08:22.449438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:40:53.892 [2024-10-08 19:08:22.449447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:40:53.892 [2024-10-08 19:08:22.449456] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:40:53.892 [2024-10-08 19:08:22.449466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:40:53.892 [2024-10-08 19:08:22.449476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:40:53.892 [2024-10-08 19:08:22.449486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:40:53.892 [2024-10-08 19:08:22.449496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:40:53.893 [2024-10-08 19:08:22.449505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:40:53.893 [2024-10-08 19:08:22.449514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:40:53.893 [2024-10-08 19:08:22.449524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:40:53.893 [2024-10-08 19:08:22.449533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:40:53.893 [2024-10-08 19:08:22.449542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:40:53.893 [2024-10-08 19:08:22.449553] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:40:53.893 [2024-10-08 19:08:22.449569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:53.893 [2024-10-08 19:08:22.449580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:40:53.893 [2024-10-08 19:08:22.449590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:40:53.893 [2024-10-08 19:08:22.449601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:40:53.893 [2024-10-08 19:08:22.449611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:40:53.893 [2024-10-08 19:08:22.449621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:40:53.893 [2024-10-08 19:08:22.449632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:40:53.893 [2024-10-08 19:08:22.449658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:40:53.893 [2024-10-08 19:08:22.449669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:40:53.893 [2024-10-08 19:08:22.449679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:40:53.893 [2024-10-08 19:08:22.449690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:40:53.893 [2024-10-08 19:08:22.449700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:40:53.893 [2024-10-08 19:08:22.449710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:40:53.893 [2024-10-08 19:08:22.449721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:40:53.893 [2024-10-08 19:08:22.449732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:40:53.893 [2024-10-08 19:08:22.449742] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:40:53.893 [2024-10-08 19:08:22.449754] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:53.893 [2024-10-08 19:08:22.449765] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:40:53.893 [2024-10-08 19:08:22.449778] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:40:53.893 [2024-10-08 19:08:22.449788] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:40:53.893 [2024-10-08 19:08:22.449799] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:40:53.893 [2024-10-08 19:08:22.449810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:53.893 [2024-10-08 19:08:22.449821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:40:53.893 [2024-10-08 19:08:22.449832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.860 ms 00:40:53.893 [2024-10-08 19:08:22.449842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:53.893 [2024-10-08 19:08:22.488007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:53.893 [2024-10-08 19:08:22.488069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:40:53.893 [2024-10-08 19:08:22.488085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.064 ms 00:40:53.893 [2024-10-08 19:08:22.488095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:53.893 [2024-10-08 19:08:22.488159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:53.893 [2024-10-08 19:08:22.488170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:40:53.893 [2024-10-08 19:08:22.488182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:40:53.893 [2024-10-08 19:08:22.488196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:53.893 [2024-10-08 19:08:22.545632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:53.893 [2024-10-08 19:08:22.545682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:40:53.893 [2024-10-08 19:08:22.545698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 57.354 ms 00:40:53.893 [2024-10-08 19:08:22.545709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:53.893 [2024-10-08 19:08:22.545780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:53.893 [2024-10-08 19:08:22.545792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:40:53.893 [2024-10-08 19:08:22.545803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:40:53.893 [2024-10-08 19:08:22.545814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:53.893 [2024-10-08 19:08:22.545980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:53.893 [2024-10-08 19:08:22.545995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:40:53.893 [2024-10-08 19:08:22.546007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.079 ms 00:40:53.893 [2024-10-08 19:08:22.546017] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:40:53.893 [2024-10-08 19:08:22.546062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:53.893 [2024-10-08 19:08:22.546078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:40:53.893 [2024-10-08 19:08:22.546089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:40:53.893 [2024-10-08 19:08:22.546099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:53.893 [2024-10-08 19:08:22.567252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:53.893 [2024-10-08 19:08:22.567296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:40:53.893 [2024-10-08 19:08:22.567311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.128 ms 00:40:53.893 [2024-10-08 19:08:22.567322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:53.893 [2024-10-08 19:08:22.567500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:53.893 [2024-10-08 19:08:22.567525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:40:53.893 [2024-10-08 19:08:22.567537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:40:53.893 [2024-10-08 19:08:22.567548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:53.893 [2024-10-08 19:08:22.594307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:53.893 [2024-10-08 19:08:22.594476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:40:53.893 [2024-10-08 19:08:22.594575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.726 ms 00:40:53.893 [2024-10-08 19:08:22.594613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:53.893 [2024-10-08 19:08:22.609681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:53.893 [2024-10-08 19:08:22.609814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:40:53.893 [2024-10-08 19:08:22.609927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.734 ms 00:40:53.893 [2024-10-08 19:08:22.609985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:54.152 [2024-10-08 19:08:22.699677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:54.152 [2024-10-08 19:08:22.699748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:40:54.152 [2024-10-08 19:08:22.699766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 89.478 ms 00:40:54.152 [2024-10-08 19:08:22.699777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:54.152 [2024-10-08 19:08:22.700014] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:40:54.152 [2024-10-08 19:08:22.700163] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:40:54.152 [2024-10-08 19:08:22.700272] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:40:54.152 [2024-10-08 19:08:22.700388] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:40:54.152 [2024-10-08 19:08:22.700406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:54.152 [2024-10-08 19:08:22.700418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:40:54.152 [2024-10-08 
19:08:22.700429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.549 ms 00:40:54.152 [2024-10-08 19:08:22.700444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:54.152 [2024-10-08 19:08:22.700542] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:40:54.152 [2024-10-08 19:08:22.700557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:54.152 [2024-10-08 19:08:22.700567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:40:54.152 [2024-10-08 19:08:22.700579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:40:54.152 [2024-10-08 19:08:22.700589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:54.152 [2024-10-08 19:08:22.723647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:54.152 [2024-10-08 19:08:22.723785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:40:54.152 [2024-10-08 19:08:22.723824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.032 ms 00:40:54.152 [2024-10-08 19:08:22.723835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:54.152 [2024-10-08 19:08:22.737744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:54.152 [2024-10-08 19:08:22.737778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:40:54.152 [2024-10-08 19:08:22.737790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:40:54.152 [2024-10-08 19:08:22.737804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:54.152 [2024-10-08 19:08:22.737893] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:40:54.152 [2024-10-08 19:08:22.738136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:54.152 [2024-10-08 19:08:22.738148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:40:54.152 [2024-10-08 19:08:22.738159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.244 ms 00:40:54.152 [2024-10-08 19:08:22.738169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:54.718 [2024-10-08 19:08:23.297667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:54.718 [2024-10-08 19:08:23.297876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:40:54.718 [2024-10-08 19:08:23.297906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 558.342 ms 00:40:54.718 [2024-10-08 19:08:23.297918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:54.718 [2024-10-08 19:08:23.303984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:54.718 [2024-10-08 19:08:23.304148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:40:54.718 [2024-10-08 19:08:23.304171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.198 ms 00:40:54.718 [2024-10-08 19:08:23.304183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:54.718 [2024-10-08 19:08:23.304610] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:40:54.718 [2024-10-08 19:08:23.304633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:54.718 [2024-10-08 19:08:23.304645] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:40:54.718 [2024-10-08 19:08:23.304657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.411 ms 00:40:54.718 [2024-10-08 19:08:23.304668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:54.718 [2024-10-08 19:08:23.304709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:54.718 [2024-10-08 19:08:23.304722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:40:54.718 [2024-10-08 19:08:23.304734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:40:54.718 [2024-10-08 19:08:23.304744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:54.718 [2024-10-08 19:08:23.304780] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 566.882 ms, result 0 00:40:54.718 [2024-10-08 19:08:23.304820] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:40:54.718 [2024-10-08 19:08:23.304904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:54.718 [2024-10-08 19:08:23.304914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:40:54.718 [2024-10-08 19:08:23.304924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.085 ms 00:40:54.718 [2024-10-08 19:08:23.304934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:55.284 [2024-10-08 19:08:23.904646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:55.284 [2024-10-08 19:08:23.904750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:40:55.284 [2024-10-08 19:08:23.904769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 598.409 ms 00:40:55.284 [2024-10-08 19:08:23.904781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:55.284 [2024-10-08 19:08:23.911245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:55.284 [2024-10-08 19:08:23.911452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:40:55.284 [2024-10-08 19:08:23.911489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.542 ms 00:40:55.284 [2024-10-08 19:08:23.911511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:55.284 [2024-10-08 19:08:23.912097] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:40:55.284 [2024-10-08 19:08:23.912169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:55.284 [2024-10-08 19:08:23.912192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:40:55.284 [2024-10-08 19:08:23.912209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.600 ms 00:40:55.284 [2024-10-08 19:08:23.912221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:55.284 [2024-10-08 19:08:23.912269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:55.284 [2024-10-08 19:08:23.912283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:40:55.284 [2024-10-08 19:08:23.912295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:40:55.284 [2024-10-08 19:08:23.912306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:55.284 [2024-10-08 
19:08:23.912348] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 607.518 ms, result 0 00:40:55.284 [2024-10-08 19:08:23.912401] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:40:55.284 [2024-10-08 19:08:23.912414] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:40:55.284 [2024-10-08 19:08:23.912429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:55.284 [2024-10-08 19:08:23.912441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:40:55.284 [2024-10-08 19:08:23.912460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1174.546 ms 00:40:55.284 [2024-10-08 19:08:23.912471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:55.284 [2024-10-08 19:08:23.912506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:55.284 [2024-10-08 19:08:23.912518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:40:55.284 [2024-10-08 19:08:23.912531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:40:55.284 [2024-10-08 19:08:23.912542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:55.284 [2024-10-08 19:08:23.926597] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:40:55.284 [2024-10-08 19:08:23.926897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:55.284 [2024-10-08 19:08:23.926921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:40:55.284 [2024-10-08 19:08:23.926935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.336 ms 00:40:55.284 [2024-10-08 19:08:23.926947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:55.284 [2024-10-08 19:08:23.927639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:55.284 [2024-10-08 19:08:23.927662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:40:55.284 [2024-10-08 19:08:23.927675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.572 ms 00:40:55.284 [2024-10-08 19:08:23.927686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:55.284 [2024-10-08 19:08:23.929808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:55.285 [2024-10-08 19:08:23.929966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:40:55.285 [2024-10-08 19:08:23.929989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.100 ms 00:40:55.285 [2024-10-08 19:08:23.930001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:55.285 [2024-10-08 19:08:23.930072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:55.285 [2024-10-08 19:08:23.930097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:40:55.285 [2024-10-08 19:08:23.930109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:40:55.285 [2024-10-08 19:08:23.930121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:55.285 [2024-10-08 19:08:23.930241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:55.285 [2024-10-08 19:08:23.930255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:40:55.285 
[2024-10-08 19:08:23.930267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:40:55.285 [2024-10-08 19:08:23.930278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:55.285 [2024-10-08 19:08:23.930305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:55.285 [2024-10-08 19:08:23.930320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:40:55.285 [2024-10-08 19:08:23.930331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:40:55.285 [2024-10-08 19:08:23.930342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:55.285 [2024-10-08 19:08:23.930382] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:40:55.285 [2024-10-08 19:08:23.930396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:55.285 [2024-10-08 19:08:23.930407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:40:55.285 [2024-10-08 19:08:23.930418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:40:55.285 [2024-10-08 19:08:23.930428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:55.285 [2024-10-08 19:08:23.930488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:40:55.285 [2024-10-08 19:08:23.930501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:40:55.285 [2024-10-08 19:08:23.930521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:40:55.285 [2024-10-08 19:08:23.930532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:40:55.285 [2024-10-08 19:08:23.932271] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1532.276 ms, result 0 00:40:55.285 [2024-10-08 19:08:23.947748] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:55.285 [2024-10-08 19:08:23.963755] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:40:55.285 [2024-10-08 19:08:23.975037] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:40:55.285 Validate MD5 checksum, iteration 1 00:40:55.285 19:08:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:55.285 19:08:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:40:55.285 19:08:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:40:55.285 19:08:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:40:55.285 19:08:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:40:55.285 19:08:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:40:55.285 19:08:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:40:55.285 19:08:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:40:55.285 19:08:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:40:55.285 19:08:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:40:55.285 19:08:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:40:55.285 19:08:24 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:40:55.285 19:08:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:40:55.285 19:08:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:40:55.285 19:08:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:40:55.543 [2024-10-08 19:08:24.135158] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 00:40:55.543 [2024-10-08 19:08:24.135606] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82230 ] 00:40:55.801 [2024-10-08 19:08:24.323193] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:56.064 [2024-10-08 19:08:24.589204] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:40:57.965  [2024-10-08T19:08:27.289Z] Copying: 575/1024 [MB] (575 MBps) [2024-10-08T19:08:29.827Z] Copying: 1024/1024 [MB] (average 578 MBps) 00:41:01.070 00:41:01.070 19:08:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:41:01.070 19:08:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:41:02.978 19:08:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:41:02.978 Validate MD5 checksum, iteration 2 00:41:02.978 19:08:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=75c68b1bf449b8f2f2b75209468bdbe6 00:41:02.978 19:08:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 75c68b1bf449b8f2f2b75209468bdbe6 != \7\5\c\6\8\b\1\b\f\4\4\9\b\8\f\2\f\2\b\7\5\2\0\9\4\6\8\b\d\b\e\6 ]] 00:41:02.978 19:08:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:41:02.978 19:08:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:41:02.978 19:08:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:41:02.978 19:08:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:41:02.978 19:08:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:41:02.978 19:08:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:41:02.978 19:08:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:41:02.978 19:08:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:41:02.978 19:08:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:41:02.978 [2024-10-08 19:08:31.674276] Starting SPDK v25.01-pre git sha1 
716daf683 / DPDK 24.03.0 initialization... 00:41:02.978 [2024-10-08 19:08:31.675005] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82308 ] 00:41:03.237 [2024-10-08 19:08:31.882041] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:03.496 [2024-10-08 19:08:32.148065] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:41:05.399  [2024-10-08T19:08:34.722Z] Copying: 627/1024 [MB] (627 MBps) [2024-10-08T19:08:36.095Z] Copying: 1024/1024 [MB] (average 615 MBps) 00:41:07.338 00:41:07.338 19:08:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:41:07.338 19:08:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:41:09.241 19:08:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:41:09.241 19:08:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=11d116c08b950289769ad06320a9b1e2 00:41:09.241 19:08:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 11d116c08b950289769ad06320a9b1e2 != \1\1\d\1\1\6\c\0\8\b\9\5\0\2\8\9\7\6\9\a\d\0\6\3\2\0\a\9\b\1\e\2 ]] 00:41:09.241 19:08:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:41:09.241 19:08:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:41:09.241 19:08:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:41:09.241 19:08:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:41:09.241 19:08:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:41:09.241 19:08:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:41:09.241 19:08:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:41:09.241 19:08:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:41:09.241 19:08:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:41:09.241 19:08:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:41:09.241 19:08:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 82184 ]] 00:41:09.241 19:08:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 82184 00:41:09.241 19:08:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 82184 ']' 00:41:09.241 19:08:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 82184 00:41:09.241 19:08:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:41:09.241 19:08:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:09.241 19:08:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82184 00:41:09.241 killing process with pid 82184 00:41:09.241 19:08:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:41:09.241 19:08:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:41:09.241 19:08:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82184' 00:41:09.241 19:08:37 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@969 -- # kill 82184 00:41:09.241 19:08:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 82184 00:41:10.615 [2024-10-08 19:08:39.057034] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:41:10.615 [2024-10-08 19:08:39.078422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:10.615 [2024-10-08 19:08:39.078479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:41:10.615 [2024-10-08 19:08:39.078511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:41:10.615 [2024-10-08 19:08:39.078528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:10.615 [2024-10-08 19:08:39.078553] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:41:10.615 [2024-10-08 19:08:39.083171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:10.615 [2024-10-08 19:08:39.083204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:41:10.615 [2024-10-08 19:08:39.083218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.600 ms 00:41:10.615 [2024-10-08 19:08:39.083245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:10.615 [2024-10-08 19:08:39.083483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:10.615 [2024-10-08 19:08:39.083496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:41:10.615 [2024-10-08 19:08:39.083507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.213 ms 00:41:10.615 [2024-10-08 19:08:39.083518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:10.615 [2024-10-08 19:08:39.084683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:10.615 [2024-10-08 19:08:39.084728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:41:10.615 [2024-10-08 19:08:39.084741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.148 ms 00:41:10.615 [2024-10-08 19:08:39.084752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:10.615 [2024-10-08 19:08:39.085830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:10.615 [2024-10-08 19:08:39.085859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:41:10.615 [2024-10-08 19:08:39.085871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.043 ms 00:41:10.615 [2024-10-08 19:08:39.085882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:10.615 [2024-10-08 19:08:39.101397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:10.615 [2024-10-08 19:08:39.101438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:41:10.615 [2024-10-08 19:08:39.101452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.455 ms 00:41:10.615 [2024-10-08 19:08:39.101463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:10.615 [2024-10-08 19:08:39.109788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:10.615 [2024-10-08 19:08:39.109826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:41:10.615 [2024-10-08 19:08:39.109840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.287 ms 00:41:10.615 [2024-10-08 19:08:39.109851] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:41:10.615 [2024-10-08 19:08:39.109953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:10.615 [2024-10-08 19:08:39.109982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:41:10.615 [2024-10-08 19:08:39.109994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.063 ms 00:41:10.615 [2024-10-08 19:08:39.110005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:10.615 [2024-10-08 19:08:39.125079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:10.615 [2024-10-08 19:08:39.125117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:41:10.615 [2024-10-08 19:08:39.125129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.056 ms 00:41:10.615 [2024-10-08 19:08:39.125139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:10.615 [2024-10-08 19:08:39.140744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:10.615 [2024-10-08 19:08:39.140780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:41:10.615 [2024-10-08 19:08:39.140792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.568 ms 00:41:10.615 [2024-10-08 19:08:39.140802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:10.615 [2024-10-08 19:08:39.155942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:10.615 [2024-10-08 19:08:39.155992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:41:10.615 [2024-10-08 19:08:39.156005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.104 ms 00:41:10.615 [2024-10-08 19:08:39.156016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:10.615 [2024-10-08 19:08:39.171191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:10.615 [2024-10-08 19:08:39.171230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:41:10.615 [2024-10-08 19:08:39.171242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.053 ms 00:41:10.615 [2024-10-08 19:08:39.171252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:10.615 [2024-10-08 19:08:39.171287] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:41:10.615 [2024-10-08 19:08:39.171304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:41:10.615 [2024-10-08 19:08:39.171318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:41:10.615 [2024-10-08 19:08:39.171329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:41:10.615 [2024-10-08 19:08:39.171341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:41:10.615 [2024-10-08 19:08:39.171354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:41:10.615 [2024-10-08 19:08:39.171365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:41:10.615 [2024-10-08 19:08:39.171383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:41:10.615 [2024-10-08 19:08:39.171394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:41:10.615 
[2024-10-08 19:08:39.171405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:41:10.615 [2024-10-08 19:08:39.171416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:41:10.615 [2024-10-08 19:08:39.171427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:41:10.615 [2024-10-08 19:08:39.171437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:41:10.615 [2024-10-08 19:08:39.171448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:41:10.615 [2024-10-08 19:08:39.171459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:41:10.615 [2024-10-08 19:08:39.171469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:41:10.615 [2024-10-08 19:08:39.171480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:41:10.615 [2024-10-08 19:08:39.171490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:41:10.615 [2024-10-08 19:08:39.171500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:41:10.615 [2024-10-08 19:08:39.171514] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:41:10.615 [2024-10-08 19:08:39.171524] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 86c06e3e-1e42-4b15-b343-0a472f3d71a8 00:41:10.615 [2024-10-08 19:08:39.171534] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:41:10.615 [2024-10-08 19:08:39.171544] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:41:10.615 [2024-10-08 19:08:39.171554] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:41:10.615 [2024-10-08 19:08:39.171570] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:41:10.615 [2024-10-08 19:08:39.171581] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:41:10.615 [2024-10-08 19:08:39.171591] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:41:10.615 [2024-10-08 19:08:39.171601] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:41:10.615 [2024-10-08 19:08:39.171610] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:41:10.615 [2024-10-08 19:08:39.171619] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:41:10.615 [2024-10-08 19:08:39.171629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:10.615 [2024-10-08 19:08:39.171651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:41:10.615 [2024-10-08 19:08:39.171662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.343 ms 00:41:10.615 [2024-10-08 19:08:39.171673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:10.615 [2024-10-08 19:08:39.192607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:10.615 [2024-10-08 19:08:39.192651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:41:10.615 [2024-10-08 19:08:39.192664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.900 ms 00:41:10.615 [2024-10-08 19:08:39.192676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:41:10.615 [2024-10-08 19:08:39.193252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:41:10.615 [2024-10-08 19:08:39.193270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:41:10.615 [2024-10-08 19:08:39.193281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.549 ms 00:41:10.615 [2024-10-08 19:08:39.193291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:10.615 [2024-10-08 19:08:39.252999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:41:10.615 [2024-10-08 19:08:39.253067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:41:10.615 [2024-10-08 19:08:39.253081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:41:10.615 [2024-10-08 19:08:39.253092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:10.615 [2024-10-08 19:08:39.253132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:41:10.615 [2024-10-08 19:08:39.253144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:41:10.615 [2024-10-08 19:08:39.253154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:41:10.615 [2024-10-08 19:08:39.253165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:10.615 [2024-10-08 19:08:39.253250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:41:10.615 [2024-10-08 19:08:39.253264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:41:10.615 [2024-10-08 19:08:39.253280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:41:10.615 [2024-10-08 19:08:39.253290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:10.615 [2024-10-08 19:08:39.253308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:41:10.615 [2024-10-08 19:08:39.253319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:41:10.615 [2024-10-08 19:08:39.253329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:41:10.615 [2024-10-08 19:08:39.253338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:10.873 [2024-10-08 19:08:39.374892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:41:10.873 [2024-10-08 19:08:39.375003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:41:10.873 [2024-10-08 19:08:39.375019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:41:10.873 [2024-10-08 19:08:39.375031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:10.873 [2024-10-08 19:08:39.476623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:41:10.873 [2024-10-08 19:08:39.476713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:41:10.873 [2024-10-08 19:08:39.476729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:41:10.873 [2024-10-08 19:08:39.476739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:10.873 [2024-10-08 19:08:39.476861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:41:10.873 [2024-10-08 19:08:39.476874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:41:10.873 [2024-10-08 19:08:39.476885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:41:10.873 [2024-10-08 19:08:39.476907] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:10.873 [2024-10-08 19:08:39.476982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:41:10.873 [2024-10-08 19:08:39.477010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:41:10.873 [2024-10-08 19:08:39.477026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:41:10.873 [2024-10-08 19:08:39.477036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:10.873 [2024-10-08 19:08:39.477152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:41:10.873 [2024-10-08 19:08:39.477166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:41:10.873 [2024-10-08 19:08:39.477177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:41:10.873 [2024-10-08 19:08:39.477188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:10.873 [2024-10-08 19:08:39.477248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:41:10.873 [2024-10-08 19:08:39.477261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:41:10.873 [2024-10-08 19:08:39.477271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:41:10.873 [2024-10-08 19:08:39.477282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:10.873 [2024-10-08 19:08:39.477323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:41:10.873 [2024-10-08 19:08:39.477335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:41:10.873 [2024-10-08 19:08:39.477345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:41:10.873 [2024-10-08 19:08:39.477355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:10.873 [2024-10-08 19:08:39.477405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:41:10.874 [2024-10-08 19:08:39.477417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:41:10.874 [2024-10-08 19:08:39.477428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:41:10.874 [2024-10-08 19:08:39.477438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:41:10.874 [2024-10-08 19:08:39.477562] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 399.102 ms, result 0 00:41:12.276 19:08:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:41:12.276 19:08:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:41:12.276 19:08:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:41:12.276 19:08:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:41:12.276 19:08:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:41:12.276 19:08:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:41:12.276 Remove shared memory files 00:41:12.276 19:08:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:41:12.276 19:08:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:41:12.276 19:08:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:41:12.276 19:08:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:41:12.276 19:08:40 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid81963 00:41:12.276 19:08:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:41:12.276 19:08:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:41:12.276 00:41:12.276 real 1m32.339s 00:41:12.276 user 2m9.099s 00:41:12.276 sys 0m24.129s 00:41:12.276 19:08:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:12.276 19:08:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:41:12.276 ************************************ 00:41:12.276 END TEST ftl_upgrade_shutdown 00:41:12.276 ************************************ 00:41:12.276 19:08:40 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:41:12.276 19:08:40 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:41:12.276 19:08:40 ftl -- ftl/ftl.sh@14 -- # killprocess 75142 00:41:12.276 19:08:40 ftl -- common/autotest_common.sh@950 -- # '[' -z 75142 ']' 00:41:12.276 19:08:40 ftl -- common/autotest_common.sh@954 -- # kill -0 75142 00:41:12.276 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (75142) - No such process 00:41:12.276 Process with pid 75142 is not found 00:41:12.276 19:08:40 ftl -- common/autotest_common.sh@977 -- # echo 'Process with pid 75142 is not found' 00:41:12.276 19:08:40 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:41:12.277 19:08:40 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=82434 00:41:12.277 19:08:40 ftl -- ftl/ftl.sh@20 -- # waitforlisten 82434 00:41:12.277 19:08:40 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:41:12.277 19:08:40 ftl -- common/autotest_common.sh@831 -- # '[' -z 82434 ']' 00:41:12.277 19:08:40 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:12.277 19:08:40 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:12.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:12.277 19:08:40 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:12.277 19:08:40 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:12.277 19:08:40 ftl -- common/autotest_common.sh@10 -- # set +x 00:41:12.535 [2024-10-08 19:08:41.124308] Starting SPDK v25.01-pre git sha1 716daf683 / DPDK 24.03.0 initialization... 
00:41:12.535 [2024-10-08 19:08:41.124488] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82434 ] 00:41:12.794 [2024-10-08 19:08:41.307261] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:12.794 [2024-10-08 19:08:41.520074] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:41:13.729 19:08:42 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:13.729 19:08:42 ftl -- common/autotest_common.sh@864 -- # return 0 00:41:13.729 19:08:42 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:41:13.988 nvme0n1 00:41:13.988 19:08:42 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:41:13.988 19:08:42 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:41:13.988 19:08:42 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:41:14.247 19:08:42 ftl -- ftl/common.sh@28 -- # stores=7ff34663-ba01-40f5-99d2-b96d864ac527 00:41:14.247 19:08:42 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:41:14.506 19:08:43 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7ff34663-ba01-40f5-99d2-b96d864ac527 00:41:14.506 19:08:43 ftl -- ftl/ftl.sh@23 -- # killprocess 82434 00:41:14.506 19:08:43 ftl -- common/autotest_common.sh@950 -- # '[' -z 82434 ']' 00:41:14.506 19:08:43 ftl -- common/autotest_common.sh@954 -- # kill -0 82434 00:41:14.506 19:08:43 ftl -- common/autotest_common.sh@955 -- # uname 00:41:14.506 19:08:43 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:14.506 19:08:43 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82434 00:41:14.506 19:08:43 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:41:14.506 19:08:43 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:41:14.506 killing process with pid 82434 00:41:14.506 19:08:43 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82434' 00:41:14.506 19:08:43 ftl -- common/autotest_common.sh@969 -- # kill 82434 00:41:14.506 19:08:43 ftl -- common/autotest_common.sh@974 -- # wait 82434 00:41:17.039 19:08:45 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:41:17.298 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:41:17.557 Waiting for block devices as requested 00:41:17.557 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:41:17.557 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:41:17.816 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:41:17.816 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:41:23.089 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:41:23.089 19:08:51 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:41:23.089 Remove shared memory files 00:41:23.089 19:08:51 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:41:23.089 19:08:51 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:41:23.089 19:08:51 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:41:23.089 19:08:51 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:41:23.089 19:08:51 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:41:23.089 19:08:51 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:41:23.089 00:41:23.089 real 
10m49.439s 00:41:23.089 user 13m19.940s 00:41:23.089 sys 1m35.639s 00:41:23.089 19:08:51 ftl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:23.089 19:08:51 ftl -- common/autotest_common.sh@10 -- # set +x 00:41:23.089 ************************************ 00:41:23.089 END TEST ftl 00:41:23.089 ************************************ 00:41:23.089 19:08:51 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:41:23.089 19:08:51 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:41:23.089 19:08:51 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:41:23.089 19:08:51 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:41:23.089 19:08:51 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:41:23.089 19:08:51 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:41:23.089 19:08:51 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:41:23.089 19:08:51 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:41:23.089 19:08:51 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:41:23.089 19:08:51 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:41:23.089 19:08:51 -- common/autotest_common.sh@724 -- # xtrace_disable 00:41:23.089 19:08:51 -- common/autotest_common.sh@10 -- # set +x 00:41:23.089 19:08:51 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:41:23.089 19:08:51 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:41:23.089 19:08:51 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:41:23.089 19:08:51 -- common/autotest_common.sh@10 -- # set +x 00:41:24.992 INFO: APP EXITING 00:41:24.992 INFO: killing all VMs 00:41:24.992 INFO: killing vhost app 00:41:24.992 INFO: EXIT DONE 00:41:25.559 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:41:25.817 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:41:25.817 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:41:26.076 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:41:26.076 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:41:26.334 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:41:26.900 Cleaning 00:41:26.900 Removing: /var/run/dpdk/spdk0/config 00:41:26.900 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:41:26.900 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:41:26.900 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:41:26.900 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:41:26.900 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:41:26.900 Removing: /var/run/dpdk/spdk0/hugepage_info 00:41:26.900 Removing: /var/run/dpdk/spdk0 00:41:26.900 Removing: /var/run/dpdk/spdk_pid58145 00:41:26.900 Removing: /var/run/dpdk/spdk_pid58408 00:41:26.900 Removing: /var/run/dpdk/spdk_pid58648 00:41:26.900 Removing: /var/run/dpdk/spdk_pid58763 00:41:26.900 Removing: /var/run/dpdk/spdk_pid58830 00:41:26.900 Removing: /var/run/dpdk/spdk_pid58969 00:41:26.900 Removing: /var/run/dpdk/spdk_pid58987 00:41:26.900 Removing: /var/run/dpdk/spdk_pid59214 00:41:26.900 Removing: /var/run/dpdk/spdk_pid59337 00:41:26.900 Removing: /var/run/dpdk/spdk_pid59455 00:41:26.900 Removing: /var/run/dpdk/spdk_pid59588 00:41:26.900 Removing: /var/run/dpdk/spdk_pid59713 00:41:26.900 Removing: /var/run/dpdk/spdk_pid59758 00:41:26.900 Removing: /var/run/dpdk/spdk_pid59800 00:41:26.900 Removing: /var/run/dpdk/spdk_pid59876 00:41:26.901 Removing: /var/run/dpdk/spdk_pid60017 00:41:26.901 Removing: /var/run/dpdk/spdk_pid60491 00:41:26.901 Removing: /var/run/dpdk/spdk_pid60572 00:41:26.901 
Removing: /var/run/dpdk/spdk_pid60657 00:41:26.901 Removing: /var/run/dpdk/spdk_pid60684 00:41:26.901 Removing: /var/run/dpdk/spdk_pid60853 00:41:26.901 Removing: /var/run/dpdk/spdk_pid60870 00:41:26.901 Removing: /var/run/dpdk/spdk_pid61041 00:41:26.901 Removing: /var/run/dpdk/spdk_pid61063 00:41:26.901 Removing: /var/run/dpdk/spdk_pid61138 00:41:26.901 Removing: /var/run/dpdk/spdk_pid61167 00:41:26.901 Removing: /var/run/dpdk/spdk_pid61242 00:41:26.901 Removing: /var/run/dpdk/spdk_pid61260 00:41:26.901 Removing: /var/run/dpdk/spdk_pid61472 00:41:26.901 Removing: /var/run/dpdk/spdk_pid61514 00:41:26.901 Removing: /var/run/dpdk/spdk_pid61603 00:41:26.901 Removing: /var/run/dpdk/spdk_pid61802 00:41:26.901 Removing: /var/run/dpdk/spdk_pid61903 00:41:26.901 Removing: /var/run/dpdk/spdk_pid61956 00:41:26.901 Removing: /var/run/dpdk/spdk_pid62451 00:41:26.901 Removing: /var/run/dpdk/spdk_pid62555 00:41:26.901 Removing: /var/run/dpdk/spdk_pid62676 00:41:26.901 Removing: /var/run/dpdk/spdk_pid62740 00:41:26.901 Removing: /var/run/dpdk/spdk_pid62771 00:41:26.901 Removing: /var/run/dpdk/spdk_pid62860 00:41:26.901 Removing: /var/run/dpdk/spdk_pid63518 00:41:26.901 Removing: /var/run/dpdk/spdk_pid63566 00:41:26.901 Removing: /var/run/dpdk/spdk_pid64112 00:41:26.901 Removing: /var/run/dpdk/spdk_pid64220 00:41:27.159 Removing: /var/run/dpdk/spdk_pid64353 00:41:27.159 Removing: /var/run/dpdk/spdk_pid64410 00:41:27.159 Removing: /var/run/dpdk/spdk_pid64441 00:41:27.159 Removing: /var/run/dpdk/spdk_pid64468 00:41:27.159 Removing: /var/run/dpdk/spdk_pid66379 00:41:27.159 Removing: /var/run/dpdk/spdk_pid66550 00:41:27.159 Removing: /var/run/dpdk/spdk_pid66554 00:41:27.159 Removing: /var/run/dpdk/spdk_pid66566 00:41:27.159 Removing: /var/run/dpdk/spdk_pid66621 00:41:27.159 Removing: /var/run/dpdk/spdk_pid66625 00:41:27.159 Removing: /var/run/dpdk/spdk_pid66637 00:41:27.159 Removing: /var/run/dpdk/spdk_pid66687 00:41:27.159 Removing: /var/run/dpdk/spdk_pid66693 00:41:27.159 Removing: /var/run/dpdk/spdk_pid66705 00:41:27.159 Removing: /var/run/dpdk/spdk_pid66755 00:41:27.159 Removing: /var/run/dpdk/spdk_pid66759 00:41:27.159 Removing: /var/run/dpdk/spdk_pid66771 00:41:27.159 Removing: /var/run/dpdk/spdk_pid68168 00:41:27.159 Removing: /var/run/dpdk/spdk_pid68295 00:41:27.159 Removing: /var/run/dpdk/spdk_pid69725 00:41:27.159 Removing: /var/run/dpdk/spdk_pid71096 00:41:27.159 Removing: /var/run/dpdk/spdk_pid71233 00:41:27.159 Removing: /var/run/dpdk/spdk_pid71359 00:41:27.159 Removing: /var/run/dpdk/spdk_pid71488 00:41:27.159 Removing: /var/run/dpdk/spdk_pid71629 00:41:27.159 Removing: /var/run/dpdk/spdk_pid71718 00:41:27.159 Removing: /var/run/dpdk/spdk_pid71873 00:41:27.159 Removing: /var/run/dpdk/spdk_pid72256 00:41:27.159 Removing: /var/run/dpdk/spdk_pid72298 00:41:27.159 Removing: /var/run/dpdk/spdk_pid72777 00:41:27.159 Removing: /var/run/dpdk/spdk_pid72969 00:41:27.159 Removing: /var/run/dpdk/spdk_pid73082 00:41:27.159 Removing: /var/run/dpdk/spdk_pid73194 00:41:27.159 Removing: /var/run/dpdk/spdk_pid73255 00:41:27.159 Removing: /var/run/dpdk/spdk_pid73286 00:41:27.159 Removing: /var/run/dpdk/spdk_pid73583 00:41:27.159 Removing: /var/run/dpdk/spdk_pid73661 00:41:27.159 Removing: /var/run/dpdk/spdk_pid73748 00:41:27.159 Removing: /var/run/dpdk/spdk_pid74187 00:41:27.159 Removing: /var/run/dpdk/spdk_pid74335 00:41:27.159 Removing: /var/run/dpdk/spdk_pid75142 00:41:27.159 Removing: /var/run/dpdk/spdk_pid75292 00:41:27.159 Removing: /var/run/dpdk/spdk_pid75499 00:41:27.159 Removing: 
00:41:27.418 19:08:56 -- common/autotest_common.sh@1451 -- # return 0
00:41:27.418 19:08:56 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup
00:41:27.418 19:08:56 -- common/autotest_common.sh@730 -- # xtrace_disable
00:41:27.418 19:08:56 -- common/autotest_common.sh@10 -- # set +x
00:41:27.418 19:08:56 -- spdk/autotest.sh@387 -- # timing_exit autotest
00:41:27.418 19:08:56 -- common/autotest_common.sh@730 -- # xtrace_disable
00:41:27.418 19:08:56 -- common/autotest_common.sh@10 -- # set +x
00:41:27.418 19:08:56 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:41:27.418 19:08:56 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:41:27.418 19:08:56 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:41:27.418 19:08:56 -- spdk/autotest.sh@392 -- # [[ y == y ]]
00:41:27.418 19:08:56 -- spdk/autotest.sh@394 -- # hostname
00:41:27.418 19:08:56 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:41:27.676 geninfo: WARNING: invalid characters removed from testname!
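The lcov run above captures the counters accumulated while the tests ran into cov_test.info; the commands that follow merge it with the pre-test baseline and prune paths that should not count toward coverage. Condensed to the three essential steps (paths as in the log, with a short $LCOV_FLAGS standing in for the full --rc option list):

    LCOV_FLAGS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"  # abridged
    OUT=/home/vagrant/spdk_repo/spdk/../output

    # 1. capture post-test counters from the build tree
    lcov $LCOV_FLAGS -q -c --no-external -d /home/vagrant/spdk_repo/spdk -o "$OUT/cov_test.info"
    # 2. merge with the pre-test baseline so unexercised files keep zero counts
    lcov $LCOV_FLAGS -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
    # 3. drop vendored sources, mirroring the -r passes that follow below
    lcov $LCOV_FLAGS -q -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"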
00:41:54.256 19:09:20 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:41:55.236 19:09:23 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:41:57.164 19:09:25 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:41:59.693 19:09:27 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:42:01.593 19:09:30 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:42:04.126 19:09:32 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:42:06.049 19:09:34 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:42:06.049 19:09:34 -- common/autotest_common.sh@1680 -- $ [[ y == y ]]
00:42:06.049 19:09:34 -- common/autotest_common.sh@1681 -- $ lcov --version
00:42:06.049 19:09:34 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}'
00:42:06.049 19:09:34 -- common/autotest_common.sh@1681 -- $ lt 1.15 2
00:42:06.049 19:09:34 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2
00:42:06.049 19:09:34 -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:42:06.049 19:09:34 -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:42:06.049 19:09:34 -- scripts/common.sh@336 -- $ IFS=.-:
00:42:06.049 19:09:34 -- scripts/common.sh@336 -- $ read -ra ver1
00:42:06.049 19:09:34 -- scripts/common.sh@337 -- $ IFS=.-:
00:42:06.049 19:09:34 -- scripts/common.sh@337 -- $ read -ra ver2
00:42:06.049 19:09:34 -- scripts/common.sh@338 -- $ local 'op=<'
00:42:06.049 19:09:34 -- scripts/common.sh@340 -- $ ver1_l=2
00:42:06.049 19:09:34 -- scripts/common.sh@341 -- $ ver2_l=1
00:42:06.049 19:09:34 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:42:06.049 19:09:34 -- scripts/common.sh@344 -- $ case "$op" in
00:42:06.049 19:09:34 -- scripts/common.sh@345 -- $ : 1
00:42:06.049 19:09:34 -- scripts/common.sh@364 -- $ (( v = 0 ))
00:42:06.049 19:09:34 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:42:06.049 19:09:34 -- scripts/common.sh@365 -- $ decimal 1
00:42:06.049 19:09:34 -- scripts/common.sh@353 -- $ local d=1
00:42:06.049 19:09:34 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:42:06.050 19:09:34 -- scripts/common.sh@355 -- $ echo 1
00:42:06.050 19:09:34 -- scripts/common.sh@365 -- $ ver1[v]=1
00:42:06.050 19:09:34 -- scripts/common.sh@366 -- $ decimal 2
00:42:06.050 19:09:34 -- scripts/common.sh@353 -- $ local d=2
00:42:06.050 19:09:34 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:42:06.050 19:09:34 -- scripts/common.sh@355 -- $ echo 2
00:42:06.050 19:09:34 -- scripts/common.sh@366 -- $ ver2[v]=2
00:42:06.050 19:09:34 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:42:06.050 19:09:34 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:42:06.050 19:09:34 -- scripts/common.sh@368 -- $ return 0
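The xtrace above is scripts/common.sh answering "is lcov 1.15 older than 2?": both version strings are split on '.', '-', and ':' and compared numerically field by field, with missing fields treated as zero. The same algorithm as a standalone function (a condensed sketch of the idea, not the verbatim SPDK source):

    # Returns success when version $1 sorts strictly before version $2.
    ver_lt() {
        local -a ver1 ver2
        local i len
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( i = 0; i < len; i++ )); do
            (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
            (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
        done
        return 1  # equal versions are not "less than"
    }

    # ver_lt 1.15 2 succeeds here, which is why the old-style --rc flags get exported next.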
00:42:06.050 19:09:34 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:42:06.050 19:09:34 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS=
00:42:06.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:42:06.050 --rc genhtml_branch_coverage=1
00:42:06.050 --rc genhtml_function_coverage=1
00:42:06.050 --rc genhtml_legend=1
00:42:06.050 --rc geninfo_all_blocks=1
00:42:06.050 --rc geninfo_unexecuted_blocks=1
00:42:06.050
00:42:06.050 '
00:42:06.050 19:09:34 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS='
00:42:06.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:42:06.050 --rc genhtml_branch_coverage=1
00:42:06.050 --rc genhtml_function_coverage=1
00:42:06.050 --rc genhtml_legend=1
00:42:06.050 --rc geninfo_all_blocks=1
00:42:06.050 --rc geninfo_unexecuted_blocks=1
00:42:06.050
00:42:06.050 '
00:42:06.050 19:09:34 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov
00:42:06.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:42:06.050 --rc genhtml_branch_coverage=1
00:42:06.050 --rc genhtml_function_coverage=1
00:42:06.050 --rc genhtml_legend=1
00:42:06.050 --rc geninfo_all_blocks=1
00:42:06.050 --rc geninfo_unexecuted_blocks=1
00:42:06.050
00:42:06.050 '
00:42:06.050 19:09:34 -- common/autotest_common.sh@1695 -- $ LCOV='lcov
00:42:06.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:42:06.050 --rc genhtml_branch_coverage=1
00:42:06.050 --rc genhtml_function_coverage=1
00:42:06.050 --rc genhtml_legend=1
00:42:06.050 --rc geninfo_all_blocks=1
00:42:06.050 --rc geninfo_unexecuted_blocks=1
00:42:06.050
00:42:06.050 '
00:42:06.050 19:09:34 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:42:06.050 19:09:34 -- scripts/common.sh@15 -- $ shopt -s extglob
00:42:06.050 19:09:34 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:42:06.050 19:09:34 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:42:06.050 19:09:34 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:42:06.050 19:09:34 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:42:06.050 19:09:34 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:42:06.050 19:09:34 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:42:06.050 19:09:34 -- paths/export.sh@5 -- $ export PATH
00:42:06.050 19:09:34 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:42:06.050 19:09:34 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:42:06.050 19:09:34 -- common/autobuild_common.sh@486 -- $ date +%s
00:42:06.309 19:09:34 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728414574.XXXXXX
00:42:06.309 19:09:34 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728414574.thoyDn
00:42:06.309 19:09:34 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:42:06.309 19:09:34 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:42:06.309 19:09:34 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:42:06.309 19:09:34 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:42:06.309 19:09:34 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:42:06.309 19:09:34 -- common/autobuild_common.sh@502 -- $ get_config_params
00:42:06.309 19:09:34 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:42:06.309 19:09:34 -- common/autotest_common.sh@10 -- $ set +x
00:42:06.309 19:09:34 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
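The start_monitor_resources call traced just below launches collect-cpu-load and collect-vmstat in the background, records their pids under the power/ output directory, and relies on an EXIT trap to tear them down (the kill -TERM lines further on). That pidfile-plus-trap shape, with hypothetical names and an assumed $OUTPUT_DIR, looks roughly like this:

    POWER_DIR="${OUTPUT_DIR:-/tmp}/power"    # OUTPUT_DIR is an assumption here
    mkdir -p "$POWER_DIR"

    start_monitor() {                        # hypothetical stand-in for pm/common
        local name=$1; shift
        "$@" &                               # run the collector in the background
        echo $! > "$POWER_DIR/$name.pid"     # remember its pid for teardown
    }

    stop_monitors() {
        local pidfile
        for pidfile in "$POWER_DIR"/*.pid; do
            [[ -e $pidfile ]] || continue
            kill -TERM "$(<"$pidfile")" 2>/dev/null || true
            rm -f "$pidfile"
        done
    }

    trap stop_monitors EXIT                  # same shape as the trap in the log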
00:42:06.309 19:09:34 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:42:06.309 19:09:34 -- pm/common@17 -- $ local monitor
00:42:06.309 19:09:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:42:06.309 19:09:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:42:06.309 19:09:34 -- pm/common@25 -- $ sleep 1
00:42:06.309 19:09:34 -- pm/common@21 -- $ date +%s
00:42:06.309 19:09:34 -- pm/common@21 -- $ date +%s
00:42:06.309 19:09:34 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1728414574
00:42:06.309 19:09:34 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1728414574
00:42:06.309 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1728414574_collect-cpu-load.pm.log
00:42:06.309 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1728414574_collect-vmstat.pm.log
00:42:07.245 19:09:35 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:42:07.245 19:09:35 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:42:07.245 19:09:35 -- spdk/autopackage.sh@14 -- $ timing_finish
00:42:07.245 19:09:35 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:42:07.245 19:09:35 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:42:07.245 19:09:35 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:42:07.245 19:09:35 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:42:07.245 19:09:35 -- pm/common@29 -- $ signal_monitor_resources TERM
00:42:07.245 19:09:35 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:42:07.245 19:09:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:42:07.245 19:09:35 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:42:07.245 19:09:35 -- pm/common@44 -- $ pid=84168
00:42:07.245 19:09:35 -- pm/common@50 -- $ kill -TERM 84168
00:42:07.245 19:09:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:42:07.245 19:09:35 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:42:07.245 19:09:35 -- pm/common@44 -- $ pid=84169
00:42:07.245 19:09:35 -- pm/common@50 -- $ kill -TERM 84169
00:42:07.254 + [[ -n 5304 ]]
00:42:07.254 + sudo kill 5304
00:42:07.254 [Pipeline] }
00:42:07.269 [Pipeline] // timeout
00:42:07.273 [Pipeline] }
00:42:07.286 [Pipeline] // stage
00:42:07.289 [Pipeline] }
00:42:07.303 [Pipeline] // catchError
00:42:07.308 [Pipeline] stage
00:42:07.310 [Pipeline] { (Stop VM)
00:42:07.318 [Pipeline] sh
00:42:07.595 + vagrant halt
00:42:10.881 ==> default: Halting domain...
00:42:17.451 [Pipeline] sh
00:42:17.730 + vagrant destroy -f
00:42:21.012 ==> default: Removing domain...
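The Stop VM stage above shuts the guest down in two steps: vagrant halt for a graceful shutdown, then vagrant destroy -f to delete the domain so the next build starts from a clean slate. As a reusable sketch (VM_DIR is a hypothetical variable pointing at the directory holding the Vagrantfile):

    teardown_vm() {
        ( cd "${VM_DIR:?VM_DIR must point at the Vagrantfile directory}" || exit 1
          vagrant halt || true    # graceful shutdown first; tolerate halt failures
          vagrant destroy -f )    # then delete the VM; -f skips the confirmation prompt
    }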
00:42:21.283 [Pipeline] sh
00:42:21.564 + mv output /var/jenkins/workspace/nvme-vg-autotest_2/output
00:42:21.572 [Pipeline] }
00:42:21.587 [Pipeline] // stage
00:42:21.592 [Pipeline] }
00:42:21.607 [Pipeline] // dir
00:42:21.612 [Pipeline] }
00:42:21.625 [Pipeline] // wrap
00:42:21.631 [Pipeline] }
00:42:21.643 [Pipeline] // catchError
00:42:21.653 [Pipeline] stage
00:42:21.655 [Pipeline] { (Epilogue)
00:42:21.668 [Pipeline] sh
00:42:21.977 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:42:28.576 [Pipeline] catchError
00:42:28.578 [Pipeline] {
00:42:28.589 [Pipeline] sh
00:42:28.870 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:42:29.128 Artifacts sizes are good
00:42:29.138 [Pipeline] }
00:42:29.152 [Pipeline] // catchError
00:42:29.162 [Pipeline] archiveArtifacts
00:42:29.170 Archiving artifacts
00:42:29.282 [Pipeline] cleanWs
00:42:29.293 [WS-CLEANUP] Deleting project workspace...
00:42:29.293 [WS-CLEANUP] Deferred wipeout is used...
00:42:29.306 [WS-CLEANUP] done
00:42:29.307 [Pipeline] }
00:42:29.322 [Pipeline] // stage
00:42:29.326 [Pipeline] }
00:42:29.340 [Pipeline] // node
00:42:29.345 [Pipeline] End of Pipeline
00:42:29.381 Finished: SUCCESS